Welcome to TikTok’s endless cycle of censorship and mistakes

It’s not necessarily a surprise that these videos make news. People make these videos because they work. Getting views has been one of the more effective strategies for pushing a big platform to fix something for years. TikTok, Twitter, and Facebook have made it easier for users to report abuse and rule violations by other users. But when these companies appear to be breaking their own policies, people often find that the best route forward is simply to post about it on the platform itself, in the hope of going viral and getting attention that leads to some kind of resolution. Tyler’s two videos on the Marketplace bios, for example, each have more than 1 million views.

“I probably get tagged in something about once a week,” says Casey Fiesler, an assistant professor at the University of Colorado, Boulder, who studies technology ethics and online communities. She’s active on TikTok, with more than 50,000 followers, but while not everything she sees feels like a legitimate concern, she says the app’s regular parade of issues is real. TikTok has had several such errors over the past few months, all of which have disproportionately impacted marginalized groups on the platform.

MIT Technology Review has asked TikTok about each of these recent examples, and the responses are similar: after investigating, TikTok finds that the issue was created in error, emphasizes that the blocked content in question is not in violation of its policies, and points to the support the company gives such groups.

The question is whether that cycle—some technical or policy error, a viral response and apology—can be changed. 

Solving issues before they arise

“There are two kinds of harms of this probably algorithmic content moderation that people are observing,” Fiesler says. “One is false negatives. People are like, ‘Why is there so much hate speech on this platform, and why isn’t it being taken down?’”

The other is a false positive. “Their content’s getting flagged because they are someone from a marginalized group who is talking about their experiences with racism,” she says. “Hate speech and talking about hate speech can look very similar to an algorithm.”  

Both of these categories, she notes, harm the same people: those who are disproportionately targeted for abuse end up being algorithmically censored for speaking out about it.

TikTok’s mysterious recommendation algorithms are part of its success—but its unclear and constantly changing boundaries are already having a chilling effect on some users. Fiesler notes that many TikTok creators self-censor words on the platform in order to avoid triggering a review. And although she’s not sure exactly how much this tactic accomplishes, Fiesler has also started doing it herself, just in case. Account bans, algorithmic mysteries, and bizarre moderation decisions are a constant part of the conversation on the app.