AI’s Impact on Voter Sentiment: Implications for Democracy

AI chatbots may have the potential to sway voter opinions

Enrique Shore / Alamy

Could the persuasive abilities of AI chatbots signal the decline of democracy? A substantial study investigating the impact of these tools on voter sentiments revealed that AI chatbots surpass traditional political campaign methods, such as advertisements and pamphlets, in persuasiveness, rivaling seasoned campaigners as well. However, researchers see reasons for cautious optimism regarding how AI influences public opinion.

Evidence shows that AI chatbots such as ChatGPT can shift the beliefs of conspiracy theorists, winning converts to more reasonable positions, and can out-argue humans in debates. This capability raises valid worries that AI could tip the digital scales that determine election results, or be misused by malicious actors to steer users towards particular political figures.

The concerning part is that these fears have merit. In a study of thousands of voters who took part in recent elections in the US, Canada, and Poland, David Rand and his colleagues at MIT found that AI chatbots could effectively sway individuals to back specific candidates or alter their stance on certain issues.

“Conversations with these models can influence attitudes towards presidential candidates, attitudes often deemed deeply entrenched, more than previous studies would suggest,” Rand remarks.

In their American election analysis, Rand’s team surveyed 2,400 voters, asking them about the most significant policy issues or characteristics of a potential president. Subsequently, voters rated their preferences for the leading candidates, Donald Trump and Kamala Harris, on a 100-point scale and answered additional questions to clarify their choices.

The answers were then fed into a chatbot, such as ChatGPT, which was given the objective of persuading each voter either to support a candidate they already favored or to switch their support to the one they favored less. The interaction took about six minutes, consisting of three question-and-answer exchanges.

Following the AI interaction, and again at a one-month follow-up, Rand’s team found that voters had adjusted their candidate preferences by an average of 2.9 points.

The researchers also examined AI’s capacity to influence views on specific policies and observed a substantial shift in opinions on the legalization of psychedelics, with voter support moving by approximately 10 points. In comparison, video ads shifted views by only about 4.5 points, and text ads by merely 2.25 points.
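To make the size of these shifts concrete, here is a minimal sketch, using made-up ratings rather than any data from the study, of how an average preference change on a 100-point scale might be computed for each condition. The real analysis will have measured movement towards the assigned persuasion target and included follow-up waves, which this toy version ignores.

```python
# Minimal sketch with hypothetical ratings (not the study's data): each pair is a
# participant's candidate preference on a 100-point scale before and after exposure.
from statistics import mean

ratings = {
    "ai_chatbot": [(62, 71), (40, 38), (55, 66), (80, 83)],
    "video_ad":   [(62, 65), (40, 42), (55, 58), (80, 81)],
    "text_ad":    [(62, 63), (40, 41), (55, 57), (80, 80)],
}

for condition, pairs in ratings.items():
    # Mean signed change from the pre-exposure rating to the post-exposure rating.
    avg_shift = mean(post - pre for pre, post in pairs)
    print(f"{condition}: average shift = {avg_shift:+.2f} points")
```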

The magnitude of these findings is remarkable. “These effects are considerably larger than those typically observed with traditional political campaigning and are comparable to the influence stemming from expert discussions,” says Sacha Altay at the University of Zurich.

Nevertheless, the study reveals a more hopeful insight: these persuasive interactions predominantly stemmed from fact-based arguments rather than personalized content, which tends to exploit users’ personal information available to political operatives.

Another study, of approximately 77,000 people in the UK, assessed 19 large language models across 707 distinct political issues and concluded that AI performed best when employing fact-based arguments, rather than tailoring its messages to the individual.

“Essentially, it’s about creating a compelling argument that prompts a mindset shift,” Rand explains.

“This bodes well for democracy,” notes Altay. “It indicates that individuals are often more influenced by factual evidence than by personalized or manipulative strategies.”

There is a need for further research to confirm these findings, says Claes de Vreese at the University of Amsterdam. Even if they are replicated, he adds, the controlled settings of these studies, in which participants engaged with chatbots at length, may differ significantly from how people typically discuss politics with friends or colleagues.

“The structured setting of interacting about politics with a chatbot is quite different from how people usually engage with political matters,” he says.

Despite this, de Vreese notes growing evidence that people are indeed turning to AI chatbots for political advice. A recent survey of over 1,000 voters in the Netherlands ahead of the 2025 national elections found that about 10% sought AI guidance regarding candidates, political parties, and election matters. “This trend is particularly noteworthy as the elections approach,” de Vreese points out.

Even if people’s engagements with chatbots are brief, de Vreese argues that the integration of AI into political processes seems unavoidable, whether through politicians seeking policy recommendations or AI generating political advertisements. “As researchers and as a society, we must recognize that generative AI is now a vital aspect of the electoral process,” he states.


Source: www.newscientist.com

Tuning Out Digital Noise: The True Sound of Democracy in Crisis | Rafael Behr

When asked where I was going on holiday, I would describe my destination as “offline”. A more precise answer would have been France, where internet access is indeed available. However, I intentionally limited my usage: constantly checking your phone undermines the entire purpose of escaping.

In the last decade, the idea of a vacation has shifted to signify a break from the digital world rather than simply leaving home. The respite begins with logging off, rather than with boarding a flight; decluttering work emails, archiving professional WhatsApp chats, and removing social media apps signify that transition.

The gains don’t manifest immediately. The noise may echo in your mind for days before you finally sense a deeper peace, marking a shift in rhythm: the difference between drifting with the internal currents of your own thoughts and being swept along by the relentless rush of external demands. The contrast grows sharper still when, upon returning to work, you find yourself inundated with notifications and alerts.

I’m not advocating for the analog past. You won’t find me lost without Google Maps. I don’t believe that society was better off when the clergy held absolute authority or when people were more susceptible to superstitions.

We are now entering the third decade of the first digital century. The revolution is irreversible, and we are treading into uncharted territory. History has seen explosive advances in connectivity driven by major innovations in communication technology, but only a few have been truly transformative. Naomi Alderman calls these shifts “information crises”, arguing that we are living through the third, with the printing press having driven the second.

While comparisons may not be exact, the scale of our current experiences is immense and immeasurable. We have little understanding of just how far we have traveled down this digital path. AI is only in its infancy.

Readers of the Gutenberg Bible in the late 15th century had no means of predicting how movable type would revolutionize social, cultural, economic, and political frameworks in Europe. Are we better prepared to envision the world another century of digital transformation will bring?

My brief two-week hiatus from technology didn’t yield any great insights, but it underlined that these ongoing transformations are profoundly cognitive. When you step away from the constant stream of information, or dial it back, you come to appreciate just how chaotic the rest of the time is.

Our brains evolved to interpret a relatively small set of data from our immediate environment, tracking predators and figuring out survival strategies. We are organic processors, and our capacity for rational judgment is constrained by how quickly sensory signals can be turned into coherent mental models that determine our subsequent actions.




We possess remarkable abilities, but they often falter under conditions of sensory overload. That is not to deny our capacity to absorb the profound shifts we have undergone, or to learn to perceive the world at a faster pace. Just as we adapted to urban life after centuries of rural existence, we can adjust to the influx of stimuli. But such swift changes tend to be turbulent, stressful, and often violent. This information crisis is inducing a cosmological shift, altering how humanity organizes and views itself. Hierarchies crumble, societal norms are rewritten, and morality is reevaluated. A new philosophy is emerging; traditional beliefs are being discarded.

It’s no wonder that democratic systems struggle to adjust amid this upheaval. Meanwhile, a lone parliament on a small island in the north Atlantic is moving to impose new regulation on global tech.

During my brief internet hiatus last month, key provisions of the Online Safety Act came into effect. Social media platforms and search engines are now required to restrict minors’ access to content the new legislation deems harmful, including pornography and material promoting abuse, self-harm, terrorism, and suicide.

Tech companies are lobbying vigorously for change. Donald Trump’s administration views the law as an infringement on free speech. Nigel Farage concurs, threatening to repeal it if he comes to power, and a Labour minister has accused the Reform UK leader of siding with the interests of pedophiles.

As a law-abiding adult user, I find it difficult to ascertain whether these new restrictions will achieve their intended objectives. Age verification is no more of a nuisance than the personal data we already submit daily in exchange for a seamless digital experience.

Reports suggest that non-pornographic news and public health sites have been inadvertently blocked. Critics argue that the protections can be easily circumvented with minimal digital savvy. Overzealous compliance by tech companies, or careless risk management, appears to be producing inconsistent filtering. Even so, the implications for political freedom, the throttling of free speech that some critics equate with extreme censorship, seem negligible.


Admittedly, the machinery for policing information carries the potential for a more oppressive agenda. Future administrations could redefine what constitutes “harmful” content to include criticism of the government or anything undermining traditional family values. Advocates of the new legislation should be wary of its potential misuse.

Yet some of its most ardent opponents, particularly those aligned with Trump, are hardly trustworthy defenders of political freedom. Their motives stem not from a genuine concern for free speech but from the commercial interests of the companies that oversee much of our digital information landscape. The system is riddled with toxicity, and those profiting from the chaos refuse accountability, resisting regulation for the same reason polluters have resisted it since the Industrial Revolution: because they can, and because it is more profitable not to have to clean up their own mess.

Online safety regulations may have flaws, yet they might also be necessary. Currently, it’s a minor skirmish in a broader battle that will determine how power dynamics shift in the wake of the ongoing information crisis. It sends a subtle but vital message: a cry for help from politicians struggling amidst digital chaos.

Source: www.theguardian.com

The Impact of Government AI Usage on Democracy

AI can streamline government paperwork, yet significant risks exist

Brett Hondow / Alamy

A number of nations are exploring how artificial intelligence might assist with various tasks, ranging from tax processing to decisions about welfare benefits. Nonetheless, research indicates that citizens are not as optimistic as their governments, potentially jeopardizing democratic integrity.

“Focusing exclusively on immediate efficiency and appealing technologies could provoke public backlash and lead to a long-term erosion of trust and legitimacy in democratic systems,” says Alexander Wuttke at Ludwig Maximilian University of Munich, Germany.

Wuttke and his team surveyed around 1,200 individuals in the UK to gauge whether they thought human or AI management was preferable for various government functions. The scenarios included handling tax returns, making welfare application decisions, and assessing whether a defendant should be granted bail.

Participants were divided; some learned only about AI’s potential to enhance governmental efficiency, while others were informed about both the advantages and the associated risks. The risks highlighted included the challenges in discerning how AI makes decisions, an increasing governmental reliance on AI that may be detrimental in the long run, and the absence of a straightforward method for citizens to challenge or modify AI determinations.

When participants were made aware of these AI-related risks, their trust in government declined markedly and their sense of losing control increased. For instance, the proportion who felt that democratic control over government was diminishing rose from 45% to over 81% in scenarios depicting increasing governmental dependence on AI for specific functions.

After learning about the risks, the percentage of individuals expressing skepticism regarding government use of AI surged significantly. It jumped from under 20% in the baseline scenario to over 65% when participants were informed of both the benefits and risks of AI in the public sector.
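For a rough sense of how such between-group comparisons are made, here is a minimal sketch using hypothetical counts rather than the study’s data: it computes the share of skeptical respondents in a benefits-only condition and a benefits-plus-risks condition and reports the percentage-point gap.

```python
# Minimal sketch with hypothetical counts (not the study's data): comparing the
# share of skeptical respondents between two survey conditions.

def skeptic_share(skeptics: int, total: int) -> float:
    """Fraction of respondents in a condition who expressed skepticism."""
    return skeptics / total

# Group sizes and counts are invented, chosen only to mirror the reported percentages.
benefits_only = skeptic_share(110, 600)       # told only about AI's benefits (~18%)
benefits_and_risks = skeptic_share(400, 600)  # told about both benefits and risks (~67%)

print(f"benefits-only condition: {benefits_only:.0%} skeptical")
print(f"benefits-and-risks condition: {benefits_and_risks:.0%} skeptical")
print(f"difference: {(benefits_and_risks - benefits_only) * 100:.0f} percentage points")
```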

Despite these findings, democratic governments insist that AI can be used responsibly in ways that uphold public trust, says Hannah Quay-de la Vallee at the Center for Democracy and Technology in Washington, DC. However, she notes that there have been few successful applications of AI in governance to date, and several notable failures, which can have serious consequences.

For instance, attempts by several US states to automate the processing of public benefits claims have resulted in tens of thousands of people being incorrectly accused of fraud. Some of those affected went bankrupt or lost their homes. “Mistakes made by the government can have significant, long-lasting repercussions,” warns Quay-de la Vallee.


Source: www.newscientist.com

Rebuilding democracy: unleashing the true power of the people

Many of us entered this so-called super-election year with a sense of foreboding. So far, not much has happened to allay those fears. Russia’s war against Ukraine has exacerbated the perception that democracy is under threat in Europe and beyond. In the United States, presidential candidate Donald Trump has proclaimed his own dictatorial tendencies and faced two assassination attempts. And more broadly, people seem to be losing faith in politics. A 2024 report from the International Institute for Democracy and Electoral Assistance states that “most citizens in diverse countries around the world have no confidence in the performance of their political institutions.”

By many objective measures, democracy is not functioning as it should. The systems we call democracies tend to favor the wealthy. Political violence is on the rise, legislative gridlock is severe, and elections are becoming less free and fair around the world. Nearly 30 years have passed since pundits proclaimed the triumph of Western liberal democracy, but their predictions seem further than ever from coming true. So what happened?

According to Rex Paulson at the Mohammed VI Institute of Technology in Rabat, Morocco, we have lost sight of what democracy is. “We have created a terrible confusion between the system known as a republic, which relies on elections, political parties, and a permanent ruling class, and the system known as democracy, where the people directly participate in decisions and in changes of power.” The good news, he says, is that the original dream of government by the people and for the people can be revived. That’s what he and other researchers are trying to do…

Source: www.newscientist.com

Is the influence of digital technologies on voters compromising democracy?

A monster looms, threatening our democracy. The monster comes in many forms, from online misinformation networks and deepfakes, to social media bots and psychological microtargeting that uses our personal data to customize political messages to our interests, attitudes and demographics.

Considering that roughly half of the world's population will go to the polls in 2024, democracy may seem to be in good health. But many worry that it is being undermined by powerful new digital technologies that can target individuals, manipulate voters, and influence elections. Fears about digital influence stem in part from the novelty of the technology. We're still so new to the online age that no one fully understands what's going on, much less what's coming. Every new technology is unfamiliar, and it can sometimes feel like the rules of the game are being rewritten. But are these concerns justified?

We are among a growing number of researchers with expertise in political science and psychology who are trying to drag these monsters out of the shadows. Our research aims to shed light on how new technologies are being used, by whom, and how effective they are as tools of propaganda. By carefully defining the concept of digital manipulation, we can understand better than ever the threat it poses to democracy. While some lobbying groups loudly warn about its dangers, our research points to more surprising conclusions. Moreover, our findings can help society better prepare to confront digital demons, by telling us exactly what we should worry about and what are just phantoms of our imagination.

In the UK in 2010…

Source: www.newscientist.com

Tech Giants’ Disregard for Democracy Seen in Resistance to Delivery Drones | John Naughton

Scratch digital capitalists and you find technological determinists: people who believe technology drives history. These individuals view themselves as agents of what Joseph Schumpeter famously called “creative destruction”. They take pleasure in “moving fast and breaking things”, a phrase once used by Facebook founder Mark Zuckerberg, at least until their advisers convince them that this approach is unwise, not least because it means taxpayers will bear the consequences.

Technological determinism is, in fact, an ideology that influences your thoughts even when you’re not consciously aware of it. It thrives on a narrative of technical necessity: whether we agree or not, new innovations will continue to emerge. As LM Sacasas explains, “Every claim of inevitability serves a purpose, and narratives of technological inevitability serve as a convenient shield for tech companies to achieve their desired outcomes, minimize opposition, and persuade consumers that they are embracing a future that may not be desirable but is deemed necessary.”

However, for this narrative of inevitability to resonate with the general public and result in widespread adoption of the technology, politicians must eventually endorse it as well. This scenario is currently playing out with AI, although the long-term implications remain unclear. Yet some of the signs are troubling, like the cringeworthy spectacle of Rishi Sunak fawning over the world’s wealthiest individual, Elon Musk, and Tony Blair’s recent televised heart-to-heart with Demis Hassabis, the co-founder of Google DeepMind.

It’s refreshing, then, to encounter an article that explores the clash between deterministic myths and democratic realities: “Resisting Technological Inevitability: Google Wing Delivery Drones and the Battle for Our Skies”, a noteworthy academic paper soon to be published in Philosophical Transactions of the Royal Society A, a reputable journal. Written by Anna Zenz from the University of Western Australia’s School of Law and Julia Powles from its Technology & Policy Lab, the paper recounts how a major tech firm attempted to dominate a new market with a promising technology, delivery drones, without considering the societal repercussions. It also shows how a proactive, resourceful, and determined public successfully thwarted this corporate agenda.

The company in question is Wing, a subsidiary of Google’s parent company Alphabet. Their objective is to develop delivery drones to facilitate the transportation of various goods, including emergency medical aid, creating a new commercial industry that enables broad access to the skies. This is evident in Australia, which hosts Google’s largest drone operation in terms of deliveries and customer outreach. It is endorsed by both state and federal governments, with the federal government taking the lead.

Zenz and Powles argue that by persuading Australian politicians to allow the testing of an Aerial Deliveroo-like service (under the guise of an “experimental” initiative), Google heavily relied on the myth of inevitability. Officials who already believed in the inevitability of delivery drones saw the potential benefits of embracing this trend and offered their support, either passively or actively. The company then leveraged the perception of inevitability to obtain “community acceptance,” manipulating the public into silence or passive tolerance by claiming that delivery drones were an inevitable progression.

One of the test sites for this project was Bonython, a Canberra suburb where the trial commenced in July 2018. However, the project faced immediate challenges. Numerous residents were perturbed and bewildered by the sudden appearance of drones in their neighborhood. They expressed outrage over the drones’ impact on their community, local wildlife, and the environment, citing issues like unplanned landings, dropped cargo, drones flying near traffic, and birds attacking and disrupting the drones.

While many communities might have simply grumbled and overlooked these issues, Bonython took a different approach. A group of proactive residents, including a retired aviation law expert, established a dedicated online presence, distributed newsletters, conducted door-to-door outreach, engaged with politicians, contacted media outlets, and submitted information requests to local authorities.

Their efforts eventually paid off. In August 2023, Wing quietly announced the termination of operations in the Canberra region. This decision not only marked the end of the project but also triggered a parliamentary inquiry into drone delivery systems, scrutinizing aspects such as pilot training, economic implications, regulatory oversight, and the environmental impact of drone deliveries. The inquiry exposed how uncritically public officials had accepted the myth of inevitability, prompting the kind of questions that regulators and governments should consistently pose when tech companies champion “innovation” and “progress”.

As Marshall McLuhan observed in a different context, “there is absolutely no inevitability as long as there is a willingness to contemplate what is happening.” Public resistance to the myth of inevitability should always be encouraged.


What I’m Reading

The Thinker’s Work
A fascinating essay in the New Statesman in which John Gray explores Friedrich Hayek, one of the 20th century’s most enigmatic thinkers.

Turn the page
Feeling pessimistic? Check out what Henry Oliver has to say in this insightful essay.

A whole new world
Science fiction writer Karl Schroeder shares some provocative blog posts contemplating the future.

Source: www.theguardian.com

Silicon Valley Trump supporters rally behind the decline of democracy | John Naughton

How does democracy end? In his elegant book How Democracy Ends, published after Trump’s 2016 election victory, David Runciman made a startling point: the liberal democracies we take for granted will not last forever, but neither will they fail in the ways we have seen in the past, through revolution, military coup, or a breakdown of social order. Instead, they will fail in unexpected ways. The implication was that people who compare what is happening now to Germany in the 1930s are mistaken.

Until a few weeks ago, that seemed like wise advice. But then something changed: key sectors of Silicon Valley, a Democratic stronghold for decades, began to support Trump. In 2016, contrarian billionaire and PayPal co-founder Peter Thiel was the only prominent Silicon Valley figure to endorse Trump, which merely confirmed his status as a Silicon Valley outlier. But in recent weeks, many of Silicon Valley’s bigwigs (Elon Musk, Marc Andreessen, and David Sacks, to name just three) have revealed themselves as Trump supporters and donors. Musk has set up a pro-Republican political action committee (super PAC) and is donating to it. On June 6, venture capitalist Sacks hosted a $300,000-a-head fundraising dinner for Trump at his own San Francisco mansion.

Why the sudden interest in politics? It’s probably a combination of several factors. First, Biden’s billionaire tax plan (and his administration’s antitrust litigation enthusiasm). Second, Trump’s newfound enthusiasm for cryptocurrency. Third, Biden has raised far more money for his campaign. And finally, and most importantly, Trump’s momentum was beginning to look unstoppable even before Biden dropped out.

The last two factors are reminiscent of the 1930s. In 1932, the Nazi Party was in serious financial trouble, and when Hitler became chancellor the following year, he personally appealed to business leaders for help. Funds were raised from 17 different business groups, with the largest donations coming from IG Farben and Deutsche Bank. At the time, these donations must have seemed like a shrewd gamble to the businessmen who made them. But as historian Adam Tooze wrote in his landmark book on the period, it also meant that German businessmen “were willing to cooperate in the destruction of German political pluralism.” In return, according to Tooze, German business owners and managers were given unprecedented powers to control their employees, collective bargaining was abolished, and wages were frozen at relatively low levels. Corporate profits and business investment grew rapidly. Fascism was good for business, until it wasn’t.

I wonder if these thoughts were going through the minds of the tech titans enjoying a $300,000 dinner in San Francisco that June night. My guess is that they were not. Silicon Valley types don’t care much about history: they are in the business of creating the future, so they assume there is nothing to learn from the past.

That’s a pity, because history has some lessons for them. The German businessmen who decided to support Hitler in 1933 may not have known exactly what he had in mind for Germany, and probably knew nothing about the plans for the “Final Solution”. But David Sacks’s dinner guests have no such excuse: Project 2025, the plan for Trump’s second term, is available online as a 900-page document.

It’s an interesting read. It has four core objectives: protecting children and families, dismantling the administrative state, defending borders, and restoring “God-given” individual liberties. But essentially it amounts to a huge expansion of presidential power, with many alarming proposals, including putting the Department of Justice under presidential control, replacing nonpartisan civil servants with loyalists, rolling back environmental laws, carrying out mass deportations, and removing “sexual orientation and gender identity, diversity, equity and inclusion, gender, gender equality, gender equity, gender sensitivity, abortion, reproductive health and reproductive rights” from all federal rules, agency regulations, contracts, grants and laws.

The rationale for Project 2025 was a concern that Trump had no idea how to use his powers when he came to office in 2016, and a determination that he should make no such mistake next time. As public concern about the document has grown, he has tried to distance himself from it. This may be because he thinks he won’t need a plan if elected. Speaking recently at a Christian convention in Florida, he said: “Christians, get out and vote, just this time. You won’t have to do it anymore. Four more years, you know what, it will be fixed, it will be fine, you won’t have to vote anymore, my beautiful Christians.”

The lesson? Be careful what you wish for. Take note, Silicon Valley.


What I’m Reading


Where to start?
Tim Harford asks “How do we fix Britain? Here’s how” in the Financial Times.

False balance
A thoughtful Substack post by historian Timothy Snyder on “two-sidedness”, the harmful delusion of mainstream media.

In the Ether
A skeptical post in Molly White’s newsletter, Citation Needed, on cryptocurrency policy becoming an election issue.

Source: www.theguardian.com