The social media company that let President Donald Trump post unchecked for years has changed course this year. In the wake of the election, it intervened on the president's tweets about election fraud, mail-in voting, and unsubstantiated claims that he had won.
Now the outgoing president is finding it difficult to post on his favorite website. Trump's infamous Twitter account is a shadow of its former self, full of labels, warnings, and interstitials, thanks to Twitter's recent moves to flag inaccurate tweets and downplay unproven claims that violate company policies. (Since election night, 28 of Trump's tweets – plus a number of retweets – have either been restricted or labeled under Twitter's guidelines.)
"The decisions to restrict tweets are easy because there is academic, journalistic, and public consensus about the absence of electoral fraud and how our elections work," Shannon McGregor, an assistant professor and senior researcher at the University of North Carolina at Chapel Hill, told Adweek. "Compared to more partisan subjects, these calls are relatively easy to make."
Limiting engagement with Trump's posts is key to Twitter's strategy. It is also what distinguishes Twitter from other platforms such as Facebook and YouTube, according to industry experts and academics.
Like other platforms, Twitter uses a combination of technical and human review to moderate content. But while Facebook and YouTube put contextual labels on the president's untrue posts so that users could find additional information from trusted sources, those companies did not reduce the posts' algorithmic distribution.
"For social media, the real power lies in spreading these organic messages," said Cuihua (Cindy) Shen, associate professor of communication at the University of California, Davis. "In that regard, I think Twitter does a much better job than Facebook at creating friction and preventing the organic spread of misinformation."
Facebook did not immediately respond to a request for clarification on whether it throttles the algorithmic distribution of flagged posts. YouTube spokesperson Ivy Choi said the platform regularly reduces the distribution of "borderline content" and removes content that misleads voters, though she did not say whether the president's account was affected this week.
Twitter has been on this path for a while.
Earlier this year, the platform adopted and began enforcing new guidelines that put it at odds with the American president's online antics. The issue came to a head in early summer at the height of the George Floyd protests, when Twitter and Facebook took different views on whether the president's posts posed a real threat of violence. Facebook's laissez-faire approach resulted in a massive advertiser boycott and prompted the platform to adopt approaches closer to Twitter's. (Twitter's decision-making and influence were part of the reason Adweek named CEO Jack Dorsey its Digital Executive of the Year.)
After a summer spent flagging and restricting Trump tweets that violated its policies, including those glorifying violence and spreading Covid-19 misinformation, Twitter sharpened its blade ahead of Election Day. The company added false claims about electoral integrity and premature declarations of victory to its civic integrity policy.
Despite criticism that they do not curb the spread of misinformation, the contextual labels used by social media companies can be helpful.
"There is some academic evidence in the field to suggest that people respond to the kind of fact-checking or interstitial that [Facebook] created," said Emily K. Vraga, associate professor at the University of Minnesota. Vraga said that displaying factual information from trusted sources after a user sees misinformation "reduces their misperception [and brings them closer to the truth]."