TikTok is introducing technology that will automatically delete videos containing nudity, sexual activity, violence, graphic content, unlawful behavior, or other material that violates its minor safety policy. The company announced Friday that it will partially automate the review system that filters such videos in the US and Canada.
TikTok’s head of US safety, Eric Han, said the move is being made in part to reduce the number of distressing videos its human moderators have to review. He said this will allow them to devote more time to videos involving hate speech, bullying, and misinformation.
Prior to this change, TikTok’s human reviewers evaluated every video before a removal decision was made.
In the past, employees of social media giants like Facebook have suffered post-traumatic stress disorder as a result of jobs that required them to evaluate gruesome content. A former Facebook moderator once sued the company over the toll of the work, saying she had to review as many as 1,000 pieces of content per night.
TikTok’s safety team will continue to review community reports and appeals in order to remove content that violates its rules. Repeated infractions may result in an account’s ability to upload a video, comment, or edit its profile being suspended for 24 to 48 hours, the company said.
An account that violates a zero-tolerance policy, such as posting child sexual abuse material, will be automatically removed from the platform.
Accounts are penalized under the new system based on the number and severity of violations over time. Users will receive an in-app notice that their content violates TikTok’s policies, which could result in their account being banned.
TikTok emphasized that no technology can be completely accurate, so if a video is removed, creators will be notified immediately and given a reason. They can then appeal the decision. The app has a history of biased moderation, and it was recently criticized for removing the intersex hashtag twice.
TikTok said the automated technique was first tested in other countries, including Brazil and Pakistan, where it proved reliable. Only about 5% of the videos removed by its automated systems should have been left up.
That may seem like a small number, but when you consider how many videos TikTok removes — 8,540,088 in the US in the first three months of 2021 — tens or hundreds of thousands of videos could be removed by mistake.
The great majority of the videos that have been deleted fall into the categories for which TikTok has automated moderation. However, not all of those videos will be diverted away from human reviewers. Human moderators will continue to review community reports, appeals, and any videos flagged by the automated systems, a spokesperson noted.
The automation will be implemented “over the next few weeks,” TikTok said.