While I browse social media, I often feel disheartened by the overwhelming negativity, as if the world is ablaze with hatred. Yet stepping into the streets of New York City for a coffee or lunch with friends presents a stark contrast: everything feels calm. This disparity between the digital realm and my everyday life is jarring.
My work addresses issues like intergroup conflict, misinformation, technology and climate change: some of humanity's biggest challenges. Yet online, discussions of these topics carry the same fervor as debates over the White Lotus finale or the latest YouTuber scandal. Everything seems either exaggeratedly amazing or utterly terrible. But is that truly how most of us feel? No. Recent research indicates that the online environment is skewed by a tiny, highly active user base.
In a paper I co-authored with Claire Robertson and Carina Del Rosario, we found significant evidence that social media does not neutrally represent society; instead, it acts as a funhouse mirror, amplifying extreme voices while obscuring more moderate and nuanced perspectives. Much of this distortion stems from a small percentage of hyperactive users: just 10% of users generate about 97% of political tweets.
Take Elon Musk’s own platform, X, as a case in point. Despite its vast user base, a select few create the majority of political content. Musk himself tweeted 1,494 times within the first 15 days of the government efficiency cuts he led (DOGE), and his prolific posting often spread misinformation to his 221 million followers.
On February 2nd, he claimed: “Did you know that USAID, using YOUR tax dollars, funded bioweapon research, including Covid-19, that killed millions of people?” This fits a pattern of misinformation dissemination concentrated among a small number of users: just 0.1% of users share 80% of false news. Twelve accounts, dubbed the “disinformation dozen”, were responsible for much of the vaccine misinformation seen on Facebook during the pandemic, creating a misleading perception of vaccine hesitancy.
Similar trends appear across the digital landscape. Only a small faction of users engages in toxic behavior, yet they disproportionately share hostile or misleading content on platforms from Facebook to Reddit. Most individuals do not fuel the online outrage; nevertheless, superusers dominate our collective perception because of their visibility and activity.
This matters because humans form mental models of what others think, and those models shape social norms and group dynamics. On social media, this shortcut misfires: we encounter not a representative sample of views but a stream of extreme, emotionally charged content.
Consequently, many people mistakenly believe society is far more polarized and misinformed than it is. We come to view those across generational gaps, political divides or fandoms as radical, malicious or simply foolish. Our information diets are shaped by a sliver of humanity that incessantly posts about their work, identity or obsessions.
Such distortion fosters pluralistic ignorance, in which people act on a misreading of what others actually believe. Think of voters who see only outrage-driven narratives and conclude there is no common ground on issues like immigration and climate change.
Yet the challenge isn’t solely about extremists; the design of these platforms exacerbates the situation. Built to maximize engagement, their algorithms favor sensational or divisive content, elevating the users most likely to skew our shared sense of reality.
The problem compounds itself. Imagine a bustling restaurant where one table raises its voice to be heard; soon it seems everyone is shouting. The same dynamic plays out online, with users exaggerating their views to capture attention and approval. Even people who are not typically extreme may mirror such behavior to gain traction.
Most of us are not diving into trolling battles on our phones; we’re preoccupied with family, friends, or simply seeking lighthearted entertainment online. Yet, our voices are overshadowed. We have effectively surrendered the mic to the most divisive individuals, allowing them to dictate norms and actions.
With over 5 billion people on social media, this technology is here to stay. But the toxic dynamics I’ve described don’t have to prevail. The first step is recognizing the illusion and understanding that a silent majority often sits behind every heated thread. As users, we can take back control by curating our feeds, avoiding anger traps and ignoring sensational content. Think of it as adopting a healthier, less processed information diet.
In a recent series of experiments, we paid participants to unfollow the most divisive political accounts on X. A month later, they reported 23% less hostility towards opposing political groups. Their experience was so positive that nearly half chose not to refollow those accounts after the study ended, and those who maintained a healthier news feed reported reduced hostility even 11 months later.
Platforms could easily adjust their algorithms to stop amplifying the most outrageous voices and instead prioritize more balanced or nuanced content. This is what most people want. The internet is a powerful tool that can provide real value, but if our feeds continue to reflect a distorted, funhouse version of reality shaped by extreme users, we will all face the repercussions.
Jay Van Bavel is a psychology professor at New York University.
Further Reading
The Righteous Mind by Jonathan Haidt (Penguin, £12.99)
Going Mainstream by Julia Ebner (Ithaca, £10.99)
The Chaos Machine by Max Fisher (Quercus, £12.99)