Twitter staff struggles with misinformation after mass layoffs


Recent mass layoffs just days before the US midterm elections left Twitter struggling to respond to political misinformation and other harmful posts on the social media platform, according to employees who survived the cuts and an outside voting rights group.

Elon Musk cut 15% of the company's frontline content-moderation workers, compared with roughly 50% of jobs companywide, an executive said last week.

Employees said that, in preparation for the layoffs, the company also sharply reduced the number of staff able to look into a specific account's digital history and behaviour, a practice necessary to investigate whether the account has been used maliciously and to suspend it if so.

The company said it froze access to those tools to reduce “insider risk” at a time of transition.

The developments are causing concern as the US midterm elections culminate on Tuesday (Nov 8). Though millions of Americans have already cast early and absentee ballots, millions more are expected to go to the polls to cast in-person votes. Election watchers fear the platform may not be equipped to handle hate speech, misinformation that could impact voter safety and security, and actors seeking to cast doubt on the legitimate winners of elections around the country.

Researchers tracking misinformation ahead of the midterms notified Twitter on Friday about three posts from well-known far-right figures that advanced debunked claims about election fraud. The posts remained up three days later. When Common Cause, the voting rights group, asked Twitter for an update on Monday, the platform said the posts were “under review.”

Before Musk took over, Twitter responded much more quickly, said Jesse Littlewood, vice president for campaigns at Common Cause. The group had been in regular contact with Twitter staff; now it gets responses only from a generic email address.

“We had been getting much faster decisions from them, sometimes within hours,” said Littlewood. Now, he said, “It’s like pushing the button for the walk sign at the stop light, and nothing is happening.”

Musk gutted teams working on marketing, communications and editorial curation of what people see on Twitter. But his decision to retain most of Twitter’s content moderation team came as a welcome surprise to some inside and outside the company. Musk, after all, promised to let free speech flourish by loosening Twitter’s content restrictions and restoring accounts banned for violating those rules. He has also pledged to end the current user verification system in favour of a US$7.99 subscription fee.

But the fact that the content moderation team survived could mean that critical moderation functions, such as blocking incitements to political violence, will continue, and that some of the worst-case scenarios around election misinformation won't be realised. Some of Musk's own tweets have been annotated with fact-checking context in recent days.

Two employees who survived the job cuts credit a previously little-known executive, Yoel Roth, Twitter's global head of safety and integrity, with leveraging his team's importance to Musk's goals for Twitter while avoiding moves that might anger the mercurial Tesla CEO.

“Yoel Roth singlehandedly saved the company,” said a Twitter employee who spoke on condition of anonymity because of concerns about job security. “On the public side, he appropriately and thoughtfully engaged with Elon Musk in a way that was not subservient, but deferring, because Elon is the king.”

Roth has become the public face of Twitter’s content moderation since Musk took over and has regularly defended Twitter’s ongoing efforts to fight harmful misinformation. Musk, a prolific tweeter with more than 110 million followers, has frequently pointed to Roth’s Twitter feed as the most reliable account of the company’s adherence to integrity standards. And the billionaire, who embraces the idea that Twitter’s past leadership suppressed right-wing views, defended Roth when ardent Musk supporters demanded his firing over past comments they thought showed Roth’s liberal bias.

Roth, who once worked at an Apple store fixing Mac computers, joined Twitter in 2015 after spending a year studying online hate speech at Harvard University’s Berkman Klein Center for Internet and Society, according to his LinkedIn profile. In May, he took on a senior role “responsible for all user, content, and security policies, comprising more than 120 policymakers, threat investigators, data analysts, and operations specialists.”

Roth didn’t respond to requests for comment. (Source: CNA)
