Last month, Twitter user Qasim Rashid tweeted the following:
Oil & Avg Gas $ June 2008:
Oil & Avg Gas $ Mar 2022:
If you’re blaming anyone but greedy oil companies for their price gouging—you’ve bought into propaganda that hurts you more than anyone else.
— Qasim Rashid, Esq. (@QasimRashid) March 14, 2022
These numbers are not accurate. The average price of West Texas Intermediate crude oil in June 2008 was $134, not $181.58. In March 2022, it was $108, not $99.76. Gas prices were $4.05 in June 2008 and $4.22 in March 2022. So the markup on gasoline has increased modestly since 2008, but not nearly as much as this tweet suggests.
Even so, Rashid’s tweet has racked up 18,000 retweets. As of publication time, it’s still on Twitter.
Tweets like this one are on my mind as I think about Twitter’s Monday announcement that it had accepted a deal for Elon Musk to buy Twitter for $44 billion.
“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” Musk said in the press release announcing the acquisition.
In recent years, Twitter has developed an increasingly elaborate system for removing various types of harmful and low-quality content from the platform, such as hate speech, vaccine misinformation, and former President Donald Trump’s tweets tacitly endorsing the January 6 insurrection at the US Capitol.
Rashid’s tweet apparently doesn’t run afoul of any of Twitter’s rules. But garden-variety misinformation obviously isn’t helpful to a functioning democracy.
Conversations about this issue tend to break down along now-familiar partisan lines, with folks on the left demanding that social media platforms do more to fight misinformation and hate speech and folks on the right decrying that as censorship. Musk has thrown his weight behind the free-speech side of the argument; there’s little chance that Twitter will do more content moderation with Musk at the helm.
But there are options other than just taking down misinformation or leaving it up. A good starting point would be for Twitter to stop actively promoting misinformation. That oil tweet wound up with 18,000 retweets because Twitter is designed to maximize the distribution of highly “engaging” tweets. And engaging tweets are often bad tweets.
The trouble with algorithmic news feeds
When I joined Twitter in 2008, the site showed users every tweet by people they followed in strictly chronological order. In 2016, Twitter introduced a new algorithmic feed that prioritized tweets Twitter thought users were likely to care about. This change met significant resistance from users, and Twitter initially portrayed it as optional. But over time, Twitter has increasingly pushed users to switch. Today, the algorithmic feed is the default view.
It’s easy to see the shift as an innocuous improvement to the user experience. If Twitter knows which tweets I’m likely to find most interesting, why not show those first? But the switch had profound consequences for the kind of platform Twitter would become.
In 2015, I had enough Twitter followers that I could count on every tweet getting at least a few reactions. Some tweets got more reactions than others, and I usually hoped that my tweets would “go viral.” But my main motivation was to share stuff I thought was interesting with my direct followers.
But a few years later, I noticed a growing variation in the level of response to my tweets. If I wrote about a highly engaging topic (say, US politics), I would often get a bunch of likes and some retweets. But if I tweeted about a less exciting topic, engagement would be very low. Sometimes, I’d tweet and get no reactions at all.
The first few times this happened, I wondered if I’d written an especially boring tweet. But now, I think the more likely explanation is that hardly anyone sees these sorts of tweets. Once Twitter’s algorithm decides a tweet isn’t engaging enough, it stops putting the tweet into people’s newsfeeds.
The practical result is that Twitter’s software is “training” all of us on the kind of tweets to write. Nobody prevents us from writing tweets on non-engaging topics, but when we do, it’s like shouting into a void. Over time, we learn to write in a more “engaging” way—which often means writing tweets that are partisan, inflammatory, or pandering to the biases of our existing followers.
And because so much of our public discourse happens on Twitter, I think the shift to algorithmic feeds has had a non-trivial impact on our political culture. Twitter is feeding people tweets that confirm their existing biases and make them angry or fearful. When we see tweets from the “other side,” it’s often someone saying something outrageous, accompanied by dunks from our own side. We’re less likely to see tweets that challenge our prejudices or introduce us to topics we didn’t know we were interested in.
This basic insight isn’t new, of course. It has been a common criticism of social media since at least 2010, when author and activist Eli Pariser coined the term “filter bubble” to describe the phenomenon. But the rise of algorithmic feeds over the last decade has made the problem much worse. A common prescription for escaping filter bubbles is to deliberately follow people with ideological views different from your own. But this doesn’t help if Twitter’s algorithm notices you don’t engage very much with their tweets and stops showing them to you.