TikTok removed over 81 million videos in 3 months for rule violations

TikTok released this Wednesday (13) its Q2 Community Guidelines Enforcement Report, which details the content removed and the accounts banned from the platform. The social network also took the opportunity to update its tools for protecting users against abusive behavior.


Between April and June, 81,334,488 videos were removed worldwide for violating TikTok’s community guidelines or terms of service. Although this number sounds high, it represents about 1% of all material uploaded to the platform in the period, and most of it never even reached users:

  • 81% of this total was taken down within 40 hours or less;
  • 94.1% was removed before any user report; and
  • 87.5% of removed content had zero views.

This is an important metric for social media because it reveals the effectiveness of the algorithms that detect prohibited content, preventing people from being exposed to material they should not have to see. This content includes behaviors related to hate, bullying, and harassment; in the latter category, 87.3% was taken down before any report, compared to 73.2% in the first quarter of this year.


The content identification algorithm is more efficient than in the previous quarter (Image: Reproduction/TikTok)

Brazil had the third-highest number of videos removed (7,488,518), behind only the United States and Pakistan. There is no breakdown of the violation types most common in the country, but it is very likely that all of the categories above are represented.

The Chinese social network explains that these improved rates are the result of tools that proactively flag hate-related symbols, inappropriate words, name-calling, and other signs of abuse, submitting them for review by safety teams. Using expressions typical of this niche is not, by itself, necessarily a policy violation; the challenge, according to TikTok, is knowing when these terms are associated with inappropriate videos. Bullying, in particular, generally requires additional context to be recognized as such.

Combat Tools

The growing number of users brings with it the need for tools that combat abusive behavior and improve the experience, especially for teenagers, on the platform. Today, TikTok announced the option to mute comments and questions during live broadcasts: the host or a trusted helper will be able to penalize people who break the rules, from a few seconds up to the end of the live.

Moderators will be able to mute comments and questions in lives (Image: Disclosure/TikTok)

Live hosts could already disable or limit potentially harmful comments with a configurable keyword filter. With these additions, the idea is to make broadcasts a more welcoming place for everyone, from the content creator to the viewer who just wants to relax and have fun.

TikTok has launched several features along these lines in recent months, such as the ability to delete multiple comments simultaneously and to block new interactions. The network also sets under-age profiles to private and prohibits them from receiving direct messages from unknown adults. There is also an option that alerts commenters to inappropriate terms, giving them a chance to rethink a rude or policy-violating comment.

Source: TikTok
