The decision by Elon Musk’s X social media platform to enlist artificial intelligence chatbots to draft fact checks risks promoting “lies and conspiracy theories,” a former UK technology minister has warned.
Damian Collins accused X of “leaving it to the bot to edit the news” after the platform announced it would allow large language models to draft or revise community notes before users approve them. Previously, notes were written solely by humans.
X confirmed that it plans to use AI to draft fact-checking notes, saying in a statement: “We are at the forefront of enhancing information quality on the internet.”
Keith Coleman, X’s vice-president of product, said the AI-drafted notes would be shown only after human reviewers judged them useful to people from a range of perspectives.
“We designed the pilot to be human-assisted AI. We believe it can deliver both quality and reliability. We also released a paper alongside the pilot’s launch, co-authored by professors and researchers from MIT, the University of Washington, Harvard and Stanford, detailing why this combination of AI and human involvement is promising.”
However, Collins warned that the system is open to abuse, saying that AI agents working on community notes could enable the “industrial manipulation” of what users see and choose to trust on a platform with around 600 million users.
The move is the latest challenge to human fact checkers from US tech firms. Last month, Google said fact checks, including those from professional fact-checking organizations, would be given less prominence in its search results, asserting that such checks “no longer provide significant additional value to users.” In January, Meta announced it would phase out its US-based human fact checkers and replace them with its own community notes system across Instagram, Facebook, and Threads.
A research paper from X describing the new fact-checking system argues that expert fact checks are often limited in scale and lack the trust of the general public.
AI-generated community notes, it says, can be produced quickly and with minimal effort while still being of high quality. Human-written and AI-written notes will enter the same pool, with the most useful surfaced on the platform.
According to the research paper, the AI will produce neutrally written summaries of evidence. Trust in community notes, the paper states, “stems from those who evaluate them, not those who draft them.”
Andy Dudfield, head of AI at the UK fact-checking organization Full Fact, said: “These plans will add to the already significant workload for human reviewers, raising legitimate concerns that AI-generated notes could reach users without the careful drafting, review, and consideration they require.”
Samuel Stockwell, a researcher at the Alan Turing Institute’s Centre for Emerging Technology and Security, said: “AI can help fact checkers process the huge volume of claims circulating daily on social media, but much depends on the quality of the safeguards X puts in place against the risk that these AI ‘note writers’ mislead users with false or dubious narratives. Even when their claims are inaccurate, the confident delivery can be deceptive.”
Research indicates that people view human-written community notes as significantly more trustworthy than simple misinformation flags.
An analysis of hundreds of misleading posts on X in the run-up to last year’s presidential election found that in three-quarters of cases accurate community notes were not displayed, indicating they had not gained enough support from users. The misleading claims, which included assertions that Democrats were importing illegal voters and that the 2020 presidential election was stolen, amassed more than 20 billion views, according to the Center for Countering Digital Hate.
Source: www.theguardian.com
