The Moderator and the Troll: content moderation in the age of Elon Musk
ALSO: the inside story of how I got, and will now lose, my blue tick on Twitter
Are you bored of me and everyone you know wanging on about Elon Musk and Twitter? Well, park that boredom for a second, because on today’s Future Proof we’re talking about content moderation.
Content moderation is one of those media issues that’s both very important and very dry. It heavily shapes the way that brands emerge into the world – sites with lighter-touch content moderation, like, say, Reddit, have a distinct identity from those with a more heavily curated style, like Wikipedia. But in the middle lies the great sea of social media, from the shores of TikTok to the banks of the great river Facebook. These are technology products populated by user-created content, which poses a huge central question: who is responsible for that content? The platform? Or the creator?
There is a deep streak of libertarianism running through the internet, one that is currently rising closer to the surface. Think how many evangelists of our technological future have championed that freeing, self-governing ideology, from PayPal founder Peter Thiel to Wikiman Jimmy Wales, who picked Ayn Rand’s The Fountainhead as his book on Desert Island Discs. And with Musk’s acquisition of Twitter, and his push to ensure “free speech” on the platform (whatever that means), content moderation is a sexy topic of dinner-party chitchat again.
So I dialled up content moderation expert – and writer of the superb Everything in Moderation newsletter, which you must subscribe to – Ben Whitelaw, to ask a few questions about the current brouhaha and where things are going in the world of content moderation.
Why are people talking about content moderation right now?
In short, because of Elon Musk. The new owner of Twitter is seemingly figuring out his views on speech in real time while the world (by which I mean politicos and media types) watches on with bated breath. In the space of just over a week, he's threatened to form a moderation council (much like Meta's Supreme Court-style Oversight Board), vowed to make the platform "the most accurate source of information about the world" (huh?) and run a poll in which the only two answers were "freedom of speech" and "political correctness". The man is playing both moderator and troll and he's showing no sign of stopping either. And, despite an increase in hate speech since he took over which threatens the human rights of millions around the world, the self-professed Technoking duly went ahead and culled 15% of the company's Trust and Safety team. It's almost too improbable to be true.
What are people’s anxieties about the Musk takeover of Twitter?
Everyone is worried about something, and with good reason: activists and policy professionals worry about the rise of misinformation; cybersecurity folks are concerned about the hacking or leaking of sensitive data; and government officials are nervous about the close proximity of Chinese and Saudi Arabian nationals to the deal. And then there are the celebs, journos and politicos—certainly the loudest of them all—who are upset about their very important blue check becoming subject to an $8-a-month surcharge as part of Musk's efforts to raise cash and "defeat the bots and trolls" (don't ask me how).
There's also the fact that Birdwatch, Twitter's volunteer community of note-takers designed to fact-check tweets, hangs in the balance after Musk reportedly clashed with the team last week. The programme has been almost two years in the making and was only recently rolled out to US users after showing promise. Its shuttering would sound the death knell for Twitter as a safe, fun and fascinating place to be online.
What have been Twitter’s historic issues with content moderation?
Twitter's list of moderation controversies is longer than the list of Musk's alleged children: just off the top of my head, there's political shadowbanning, Covid-19 misinformation, the suspension of congresswoman Marjorie Taylor Greene, warning labels (including one for "hacked materials" following the Hunter Biden leaks), the verification of white supremacists, outsourced moderator mistreatment (along with most platforms, in fairness), and, you won't need reminding, the suspension of Donald Trump.
Despite all that, Twitter actually has a pretty good reputation in trust and safety circles for its work keeping users safe over the last decade and a half. The company is known for pushing back against governments seeking to hide or pull down posts they don't like, most notably the Indian government in May of this year. And Vijaya Gadde, Twitter's former head of legal, policy and trust, was renowned for going in to bat for free expression (ironic, considering Musk's stance) more times than almost anyone else. Who knows what will happen now. 280-character answers on a postcard.
Who would you recommend to keep tabs on the topic over the coming weeks?
It doesn’t look like Musk’s myopic views on moderation are going away, so it’s worth getting genned up. I’ve created a Twitter list that includes a wide range of practitioners and experts in the online safety space which might be useful for your readers, but I’d draw special attention to Jillian C. York (Electronic Frontier Foundation), Daphne Keller (Stanford), Juliet Shen (Grindr), Evelyn Douek (er, also Stanford), Kat Lo (Meedan), Julie Owono (Meta’s Oversight Board) and Mike Masnick (Techdirt), all of whom I’ve learnt a lot from in the four and a bit years I’ve been writing Everything in Moderation.
Want my potted thoughts on content moderation? Well, they’re below the paywall. See you later, cheapskates! *whizzes off like Bart Simpson on a skateboard*