By its very nature, TikTok is harder to moderate than many other social networking platforms, according to Cameron Hickey, project director at the Algorithmic Transparency Institute. The brevity of the videos, and the fact that many combine audio, visual, and text elements, makes human discernment even more necessary when deciding whether something violates the platform’s rules. Even advanced artificial intelligence tools, such as using speech-to-text to quickly identify problematic words, become harder to apply “when the audio you’re dealing with also has music behind it,” says Hickey. “The default mode for people who create content on TikTok is to embed music as well.”
This is made even more difficult in languages other than English.
“What we generally know is that platforms do best at dealing with problematic content in the places where they are based or in the languages spoken by the people who created it,” says Hickey. “And there are more people doing bad things than there are people at these companies trying to get rid of the bad things.”
Many of the pieces of misinformation Madung found were “synthetic content”: videos made to look as though they came from an old news broadcast, or screenshots designed to appear as if they came from legitimate news outlets.
“Since 2017, we’ve noticed a growing trend of appropriating the identities of mainstream media brands,” says Madung. “We are seeing unbridled use of this tactic on the platform, and it seems to be working exceptionally well.”
Madung also spoke with former TikTok content moderator Gadear Ayed to better understand the company’s moderation efforts more broadly. Although Ayed did not moderate Kenyan TikToks, he told Madung that he was often asked to moderate content in languages or contexts he did not know, and that he would not have had the context to tell whether a piece of media had been manipulated.
“It’s common to find moderators being asked to moderate videos in languages and contexts other than the ones they understand,” Ayed told Madung. “For example, at one point I had to moderate videos that were in Hebrew even though I didn’t know the language or the context. All I could rely on was the visual image of what I could see, but anything written I couldn’t moderate.”
A TikTok spokesperson told WIRED that the company bans election misinformation and the promotion of violence and is “committed to protecting the integrity of [its] platform and have a dedicated team working to safeguard TikTok during the Kenyan elections.” The spokesperson also said the company is working with fact-checking organizations, including Agence France-Presse in Kenya, to connect its “community with authoritative information on the Kenyan elections” in its app.
But even if TikTok removes offending content, Hickey says it may not be enough. “One person can remix, duet, or share someone else’s content,” Hickey says. This means that even if the original video is removed, other versions can go undetected. TikTok videos can also be downloaded and shared on other platforms, such as Facebook and Twitter, which is how Madung found some of them.
Several of the videos flagged in the Mozilla Foundation report have since been removed, but TikTok did not answer questions about whether it had removed other videos or whether the videos were part of a coordinated effort.
But Madung suspects they could be. “Some of the most egregious hashtags were things I would find while researching coordinated campaigns on Twitter, and then I thought: what if I searched for this on TikTok?”