Following Joe Biden’s announcement that he would not seek reelection, misinformation spread online about whether a replacement candidate could still be added to state presidential ballots.
Screenshots claiming that nine states could no longer add a new candidate to their ballots quickly went viral on Twitter (now X) and were widely viewed. The Minnesota Secretary of State’s office received requests to fact-check the posts, which turned out to be entirely false: the ballot deadlines had not passed, and Kamala Harris had ample time to be added.
The misinformation originated with Grok, X’s chatbot, which gave an incorrect answer when asked whether new candidates could still be added to the ballot.
This incident served as a test case for the interaction between election officials and artificial intelligence companies in the 2024 US presidential election, amid concerns that AI could mislead or distract voters. It also highlighted the potential role Grok could play as a chatbot lacking strict guardrails to prevent the generation of inflammatory content.
A group of secretaries of state, along with the National Association of Secretaries of State, contacted X to report the misinformation generated by Grok. Initial attempts to get it corrected were ineffective, prompting Minnesota Secretary of State Steve Simon to express disappointment at the lack of action.
While the impact of the misinformation was relatively minor and did not prevent anyone from voting, the secretaries of state took a strong stance to prevent similar incidents in the future.
The secretaries launched a public effort, signing an open letter to Grok’s owner, Elon Musk, urging that the chatbot redirect election-related queries to trusted sources such as CanIVote.org. As a result, Grok now directs users to vote.gov when asked about the election.
Simon praised the company for eventually taking responsible action and emphasized that debunking misinformation early and consistently is essential to maintaining credibility and prompting corrective responses.
Despite the initial setbacks, Grok’s redirection of users to official sources, together with Musk’s stated opposition to centralized control, offers some hope for combating misinformation. It remains critical to prevent AI tools like Grok from deepening partisan divisions or spreading inaccurate information.
At the same time, Grok’s availability through paid subscriptions and its integration into a widely used social media platform make the risk of deceptive content harder to contain. Prompt efforts to correct misinformation remain crucial to safeguarding the integrity of elections and ensuring the responsible use of AI-based tools.
Source: www.theguardian.com