Addressing Social Media Toxicity: Algorithms Alone Won’t Solve the Problem


The polarization seen on social media goes beyond algorithms. Research using AI-generated users suggests it stems from fundamental features of how these platforms operate, and that genuine solutions will require rethinking the frameworks of online communication.

Petter Törnberg at the University of Amsterdam and his colleagues created 500 AI chatbots reflecting the spread of political opinion in the United States, based on the National Election Survey. Powered by the GPT-4o mini large language model, the bots were set loose to interact with one another on a simplified social network with no ads and no recommendation algorithms.

Across five runs of the experiment, each comprising 10,000 actions, the AI agents overwhelmingly interacted with like-minded accounts. Those with more extreme views attracted the most followers and reposts, as users drawn to partisan content boosted their visibility.
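The dynamic can be sketched in miniature. The toy model below is not the researchers' code: the agent and action counts are scaled down, and a simple probability rule, combining attraction to like-minded authors with an engagement bonus for extreme content, stands in for the GPT-4o mini agents. All names and parameters are invented for illustration.

```python
import random

random.seed(42)

N_AGENTS = 100    # scaled down from the study's 500 bots
N_ACTIONS = 5000  # scaled down from 10,000 actions per run

# Each agent holds an ideology score in [-1, 1] (sign = leaning,
# magnitude = extremity) and accumulates followers and reposts.
agents = [{"id": i,
           "ideology": random.uniform(-1.0, 1.0),
           "followers": set(),
           "reposts": 0}
          for i in range(N_AGENTS)]

def engage_probability(reader, author):
    """Toy stand-in for an LLM agent's decision to engage: readers
    favour like-minded authors (homophily), and extreme content gets
    an extra engagement boost."""
    similarity = 1.0 - abs(reader["ideology"] - author["ideology"]) / 2.0
    outrage_boost = 0.5 + abs(author["ideology"])
    return min(1.0, similarity * outrage_boost)

for _ in range(N_ACTIONS):
    reader, author = random.sample(agents, 2)
    if random.random() < engage_probability(reader, author):
        author["reposts"] += 1                 # reader reposts the post
        author["followers"].add(reader["id"])  # and follows its author

def mean_followers(group):
    return sum(len(a["followers"]) for a in group) / len(group)

extreme = [a for a in agents if abs(a["ideology"]) > 0.5]
moderate = [a for a in agents if abs(a["ideology"]) <= 0.5]
print(f"extreme agents:  {mean_followers(extreme):.1f} followers on average")
print(f"moderate agents: {mean_followers(moderate):.1f} followers on average")
```

Even this stripped-down loop, with no ranking algorithm at all, produces the qualitative pattern the study reports: follow ties form mostly between like-minded agents, and follower counts concentrate on the most extreme ones.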

In prior research, Törnberg and his colleagues explored whether different algorithmic approaches in simulated social networks could mitigate political polarization. However, the new findings appear to challenge earlier conclusions.

“We expected this polarization to be largely driven by algorithms,” Törnberg states. “[We thought] the platform is geared towards maximizing engagement and inciting outrage, thus producing these outcomes.”

Instead, they found that the algorithm itself isn’t the primary culprit. “We created the simplest platform imaginable, and yet we saw these results immediately,” he explains. “This suggests that there are deeply ingrained behaviors linked to following, reposting, and engagement that are at play.”

To see whether these ingrained behaviors could be moderated or counteracted, the researchers tested six potential interventions: showing posts in purely chronological order, downranking viral content, hiding opposing viewpoints, boosting empathetic and reasoned content, hiding follower and repost counts, and hiding profile bios.
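Two of these interventions amount to changing how a feed is ranked. The sketch below illustrates that idea on a hypothetical four-post feed; the posts, scores, and ranking functions are invented for illustration and are not the study's implementation.

```python
from dataclasses import dataclass

# Hypothetical feed: each post has an age in hours, a repost count and a
# partisanship score in [0, 1]. In the study, highly partisan posts
# tended to attract the most reposts, as post "b" does here.
@dataclass
class Post:
    author: str
    age_hours: float
    reposts: int
    partisanship: float

feed = [
    Post("a", age_hours=1.0, reposts=3,   partisanship=0.1),
    Post("b", age_hours=5.0, reposts=120, partisanship=0.9),
    Post("c", age_hours=2.0, reposts=40,  partisanship=0.6),
    Post("d", age_hours=0.5, reposts=1,   partisanship=0.2),
]

def rank_by_engagement(posts):
    """Baseline engagement-style ranking: reposts per hour, so
    already-viral posts dominate the top of the feed."""
    return sorted(posts, key=lambda p: p.reposts / p.age_hours, reverse=True)

def rank_chronological(posts):
    """The 'chronological feed' intervention: newest first,
    ignoring repost counts entirely."""
    return sorted(posts, key=lambda p: p.age_hours)

def rank_deboosted(posts):
    """The 'downrank viral content' intervention: dampen the advantage
    of high repost counts by scoring on their square root."""
    return sorted(posts, key=lambda p: p.reposts ** 0.5 / p.age_hours,
                  reverse=True)

print([p.author for p in rank_by_engagement(feed)])  # ['b', 'c', 'a', 'd']
print([p.author for p in rank_chronological(feed)])  # ['d', 'a', 'c', 'b']
print([p.author for p in rank_deboosted(feed)])      # ['c', 'b', 'd', 'a']
```

Under the baseline ranking the viral, highly partisan post "b" tops the feed; the chronological and deboosted rankings displace it. As the results below show, however, such reranking changed the simulated agents' behavior far less than one might expect.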

None of the interventions had more than a modest effect: cross-partisan engagement shifted by about 6 per cent at most, and the prominence of the top accounts changed by 2 to 6 per cent. Some modifications backfired. Hiding bios worsened polarization, changes that reduced attention inequality made extreme posts more attractive, and changes that softened partisanship inadvertently concentrated attention on a small group of elite users.

“Most activities on social media devolve into toxic interactions. The root issues with social media stem from its foundational design, which can accentuate negative human behavior,” states Jess Maddox of the University of Georgia.

Törnberg recognizes that while this experiment simplifies various dynamics, it provides insights into what social platforms can do to curb polarization. “Fundamental changes may be necessary,” he cautions. “Tweaking algorithms and adjusting parameters might not be sufficient; we may need to fundamentally rethink interaction structures and how these platforms shape our political landscapes.”

Source: www.newscientist.com
