Ziggi Tyler is part of TikTok’s Creator Marketplace, a private platform where brands can connect with the app’s top creators. And last week, he noticed something pretty disturbing about how the creator bios there were being automatically moderated.
When he tried to enter certain phrases in his bio, some of them—“Black lives matter,” “supporting black people,” “supporting black voices,” and “supporting Black success”—were flagged as inappropriate content. But the same phrases with “white” substituted in were acceptable.
“If I go into the Creator Marketplace and put ‘supporting white supremacy’ and hit accept, it’s OK,” he said in a TikTok video, standing in front of what he said was a video capture of him doing just that on his own phone. To get listed on the marketplace, he had to delete what he’d written.
In a follow-up video, Tyler showed the phrases “I am a neo nazi” and “I am an anti semetic” getting accepted, while “I am a black man” was flagged. The story blew up, driven in part by an audience that was already frustrated by the way in which Black creators are treated on TikTok. In a comment to Insider, a company spokesperson apologized for the “significant” error and said that what Tyler was seeing was the result of an automatic filter set to block words associated with hate speech. The system, it said, was “erroneously set to flag phrases without respect to word order.” The company told Recode that this particular error came from the inclusion of the words “Black” and “audience,” because its hate speech detection picked up on the “die” in “audience” and flagged the pairing as inappropriate.
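TikTok hasn't published how its filter works, but the behavior the company describes (flagging word pairs regardless of order, with "die" matching inside "audience") is consistent with naive substring matching against a blocklist of keyword pairs. The sketch below is purely speculative; the blocklist entry and function names are illustrative, not TikTok's actual code.

```python
# Speculative sketch of an order-insensitive keyword-pair filter and how
# it could misfire. With naive substring matching, "die" is found inside
# "audience", so the pair ("black", "die") flags a harmless bio.
BLOCKED_PAIRS = {frozenset(["black", "die"])}  # hypothetical blocklist entry


def contains_term(word: str, term: str) -> bool:
    # Naive substring match: "die" matches inside "audience".
    return term in word


def is_flagged(bio: str) -> bool:
    words = bio.lower().split()
    for pair in BLOCKED_PAIRS:
        # Flag if every term in the pair appears somewhere in the bio,
        # regardless of word order or word boundaries.
        if all(any(contains_term(w, term) for w in words) for term in pair):
            return True
    return False


print(is_flagged("supporting my Black audience"))  # True: "die" in "audience"
print(is_flagged("supporting my white audience"))  # False: "black" is absent
```

Under this (assumed) design, the racial asymmetry Tyler saw isn't an explicit rule anywhere; it falls out of which words happen to sit in blocked pairs with common substrings.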
Going public, going viral
It’s far from the only such incident to take place on the platform. A few weeks ago, TikTok erroneously applied what appeared to be a mandatory beauty filter that created a slimmer jawline for some Android users.
In April, meanwhile, one TikTok creator noticed that the platform’s new auto captioning feature was blocking the phrase “Asian women.” And in early 2021, intersex creators found that the #intersex hashtag was no longer discoverable on the app.
These are all examples of a pattern that repeats on TikTok: first a creator notices a bizarre and potentially harmful issue with the platform’s moderation or algorithm, one that often disproportionately impacts marginalized groups. Then they make a video about what’s going on, and that video gets a lot of views. Eventually, perhaps, a journalist gets interested, asks the company to explain, and the issue is fixed. TikTok then releases a statement saying that the problem was the result of an error, and emphasizes the work it does to support the creators and causes affected.
It’s not necessarily a surprise that these videos make news. People make them because they work: for years, getting views has been one of the more effective ways to push a big platform to fix something. TikTok, Twitter, and Facebook have made it easier for users to report abuse and rule violations by other users. But when these companies appear to be breaking their own policies, people often find that the best route forward is simply to post about it on the platform itself, in the hope of going viral and getting the kind of attention that leads to a resolution. Tyler’s two videos on the Marketplace bios, for example, each have more than 1 million views.
“I probably get tagged in something about once a week,” says Casey Fiesler, an assistant professor at the University of Colorado, Boulder, who studies technology ethics and online communities. She’s active on TikTok, with more than 50,000 followers, and while not everything she sees feels like a legitimate concern, she says the app’s regular parade of issues is real. TikTok has had several such errors over the past few months, all of which have disproportionately impacted marginalized groups on the platform.
MIT Technology Review has asked TikTok about each of these recent examples, and the responses are similar: after investigating, TikTok finds that the issue was created in error, emphasizes that the blocked content in question is not in violation of its policies, and points to the support it offers the groups affected.
The question is whether that cycle—some technical or policy error, a viral response and apology—can be changed.
Solving issues before they arise
“There are two kinds of harms of this probably algorithmic content moderation that people are observing,” Fiesler says. “One is false negatives. People are like, ‘why is there so much hate speech on this platform and why isn’t it being taken down?’”
The other is a false positive. “Their content’s getting flagged because they are someone from a marginalized group who is talking about their experiences with racism,” she says. “Hate speech and talking about hate speech can look very similar to an algorithm.”
Both of these categories, she noted, harm the same people: those who are disproportionately targeted for abuse end up being algorithmically censored for speaking out about it.
TikTok’s mysterious recommendation algorithms are part of its success—but its unclear and constantly changing boundaries are already having a chilling effect on some users. Fiesler notes that many TikTok creators self-censor words on the platform in order to avoid triggering a review. And although she’s not sure exactly how much this tactic accomplishes, Fiesler has started doing it herself, just in case. Account bans, algorithmic mysteries, and bizarre moderation decisions are a constant part of the conversation on the app.
Worse still, many of these errors, Fiesler argued, would be easy to predict if companies simply thought more about how different users would actually interact with their app. The hate speech bug Tyler encountered would have been hard to miss, for instance, had the company tested its filter against the language Black creators actually use to describe themselves.
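That kind of testing doesn't require anything exotic. As a hypothetical mitigation (not anything TikTok has described), a filter could match blocked terms only at whole-word boundaries, and ship with a regression list of benign phrases that marginalized creators actually use:

```python
import re


# Hypothetical fix: match a blocked term only as a whole word, so "die"
# no longer matches inside "audience".
def contains_whole_word(text: str, term: str) -> bool:
    return re.search(rf"\b{re.escape(term)}\b", text.lower()) is not None


# A small regression suite of benign phrases real creators put in bios.
# Running phrases like these against the filter before release would
# surface the kind of bug Tyler found.
BENIGN_BIOS = [
    "supporting my Black audience",
    "I am a Black man",
    "Black lives matter",
]

if __name__ == "__main__":
    for bio in BENIGN_BIOS:
        # None of these should trip a "die" match under word-boundary rules.
        assert not contains_whole_word(bio, "die"), bio
    print("all benign bios pass")
```

The term list and phrases here are illustrative assumptions; the point is only that a test set drawn from real usage makes this class of error visible before users do.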
Getting moderation right is complex, Fiesler says, but that doesn’t excuse this constant repetition.
“I’m often more sympathetic to the challenges of these things than most people are,” she says. “But even I’m like, ‘really?’ At some point there’s patterns, and you should know what to look for.”
I asked Fiesler whether anything could slow this cycle down, even if it’s impossible to solve outright at this point. Part of the answer is one of the most longstanding stories in tech: hire, and listen to, people from a diversity of backgrounds.
But, she says, greater transparency also has to be part of the solution. Fiesler can’t say exactly why TikTok’s hate speech detector went so wrong last week, because the app hasn’t released much information on how it works—not to researchers like her, and not to the users whose content is removed or suppressed by automated processes that, history demonstrates, can easily go wrong. As a journalist, when I’ve asked TikTok (and at other times, other platforms) for more detailed explanations of why something they’re calling a bug happened, and what they’re doing to prevent it from recurring, the answers have been brief and unsatisfying, or the company has declined to comment on the record altogether.
“If creators are constantly getting stuff inappropriately flagged, then there needs to be really easy appeals processes,” she says.
Meanwhile: the cycle will continue.