Concerns Grow That X’s AI Fact-Checkers May Undermine Efforts Against Conspiracy Theories

The decision by Elon Musk’s X social media platform to enlist artificial intelligence chatbots to draft fact checks risks inadvertently promoting “lies and conspiracy theories,” a former UK technology minister has warned.

Damian Collins criticized X for “leaving it to the bot to edit the news,” following the announcement that the platform would allow large language models to draft community notes that clarify or correct contentious posts, subject to approval by users before they are shown. Previously, notes were written solely by humans.

X confirmed that it plans to use AI to draft fact-checking notes, saying in a statement: “We are at the forefront of enhancing information quality on the internet.”

Keith Coleman, vice-president of product at X, said the notes would be shown only after human reviewers had assessed the AI-generated content and people from a range of perspectives had found it useful.

“We designed the pilot to operate as human-assisted AI. We believe it can offer both quality and reliability. We also released a paper alongside the pilot’s launch, co-authored by professors and researchers from MIT, the University of Washington, Harvard and Stanford, detailing why this blend of AI and human involvement is promising.”

Collins, however, argued that the system is open to abuse: AI agents drafting community notes could enable the “industrial manipulation” of what users see and come to trust on a platform with around 600 million users.

The move is the latest challenge to human fact checkers from US technology firms. Last month, Google said it would deprioritize user-created fact checks in its search results, including those from professional fact-checking organizations, asserting that such checks “no longer provide significant additional value to users.” In January, Meta announced it would phase out human fact checkers in the US and replace them with its own community notes system across Instagram, Facebook, and Threads.

A research paper from X describing the new fact-checking system argues that professional fact checks are often limited in scale and are not trusted by large sections of the public.

The paper claims that AI-generated community notes enable rapid production with “minimal effort while maintaining high-quality potential.” Human- and AI-written notes will enter the same pool, with users rating which are useful enough to be shown on the platform.

According to the research paper, the AI will generate a “neutral summary of evidence.” Trust in community notes, the paper states, “stems from those who evaluate them, not those who draft them.”
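The gating logic the paper describes (human and AI drafts entering one pool, with notes shown only after raters of differing perspectives find them helpful) can be summarized in a few lines. The sketch below is an illustration of that description only, not X’s actual scoring algorithm; the class names, fields, and the two-perspective agreement rule are assumptions.

```python
# Minimal sketch of the note-gating idea described above: human- and
# AI-drafted notes share one pool, and a note is shown only if raters
# from at least two distinct perspective groups find it helpful.
# All names and the agreement rule are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Note:
    text: str
    author: str  # "human" or "ai" - treated identically by the pool
    ratings: dict[str, list[bool]] = field(default_factory=dict)

    def rate(self, perspective: str, helpful: bool) -> None:
        """Record one rater's helpful/unhelpful vote, keyed by perspective group."""
        self.ratings.setdefault(perspective, []).append(helpful)

    def is_published(self) -> bool:
        """Publish only if two or more perspective groups each show a majority of 'helpful' votes."""
        supportive = [
            votes for votes in self.ratings.values()
            if sum(votes) > len(votes) / 2
        ]
        return len(supportive) >= 2


pool = [
    Note("Photo is from 2019, not from this week's storm.", "ai"),
    Note("The quote is fabricated; no transcript contains it.", "human"),
]
pool[0].rate("group_a", True)
pool[0].rate("group_b", True)
print([n.is_published() for n in pool])  # [True, False]
```

X’s published scoring system is considerably more involved, but the property the paper emphasizes survives even in this toy version: the author field plays no role in whether a note is shown, only the evaluations do.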

Andy Dudfield, head of AI at the UK fact-checking organization Full Fact, said: “These plans risk adding to the already significant workload of human reviewers, and raise real concerns that AI-generated content could be published without being thoroughly drafted, reviewed, and considered.”

Samuel Stockwell, a researcher at the Alan Turing Institute’s Centre for Emerging Technology and Security, said: “AI can assist fact checkers in processing the huge volume of claims that circulate daily on social media, but much hinges on the quality of the safeguards X puts in place against the risk that these AI ‘note writers’ mislead users with false or dubious narratives. AI chatbots can deliver wrong answers with great confidence, and that confident delivery can deceive viewers even when the content is inaccurate.”

Research indicates that people view human-written community notes as significantly more trustworthy than simple misinformation flags.

An analysis of hundreds of misleading posts on X in the run-up to last year’s US presidential election found that in three-quarters of cases, accurate community notes were never displayed, indicating they had not received enough support from users. The misleading claims, which included assertions that Democrats were importing illegal voters and that the 2020 presidential election was stolen, amassed 2.2 billion views, according to the Center for Countering Digital Hate.

Source: www.theguardian.com

Fact-checkers react with dismay to Meta’s decision to scrap them

Facebook founder Mark Zuckerberg announced on Tuesday that his company, Meta, would scrap fact checkers in the US, accusing them of making biased decisions and saying he wanted greater freedom of speech. Meta uses third-party independent fact checkers from around the world. Here, one of them, who works at the Full Fact organization in London, explains what they do and their reaction to Zuckerberg’s “mind-boggling” claims.

I have been a fact checker at Full Fact in London for a year, investigating questionable content on Facebook, X and in newspapers. Our daily diet includes disinformation videos about the wars in the Middle East and Ukraine, as well as fake AI-generated clips of politicians, which are becoming increasingly difficult to disprove. Colleagues are tackling coronavirus disinformation and misinformation about cancer treatments, and there is a growing amount of climate-related material as hurricanes and wildfires become more frequent.

As soon as you log on at 9am, you are assigned something to look at. Meta’s system shows which posts are most likely to be false. Some days there are 10 or 15 potentially harmful items in the queue, which can be overwhelming; you can’t check everything.

If a post is a little wild but not harmful, like the AI-generated image of the Pope in a giant white puffer coat, we might leave it. But if it’s a fake image of Mike Tyson holding a Palestinian flag, we are more likely to address it. We propose candidates in the morning meeting and are then assigned checks to start on.

Yesterday I was working on a deepfake video in which Keir Starmer appeared to say that many of the claims about Jimmy Savile were frivolous and that this was why Savile was not prosecuted at the time. It was getting a lot of engagement. Starmer’s mouth did not look right and did not match the words; it had the hallmarks of a fake. I immediately ran a reverse image search and discovered that the video had been taken from Guardian footage from 2012. The original was of much higher quality: in the fake, the area around his mouth is very blurry, and when you compare the two you can see he is saying something quite different from what circulated on social media. We contacted the Guardian and Downing Street for comment on the original footage. You can also get in touch with media forensics and deepfake AI experts.
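To give a concrete sense of one step in that workflow, the sketch below compares a frame from a suspect clip against a frame from the original footage using a perceptual hash. It is illustrative only: the article does not describe Full Fact’s actual tooling, and the file names and distance threshold here are assumptions.

```python
# Illustrative frame comparison via perceptual hashing, assuming the
# Pillow and imagehash packages are installed. The file names are
# hypothetical stand-ins for a frame grabbed from the suspect clip
# and a frame from the original footage.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect_frame.png"))
original = imagehash.phash(Image.open("original_frame.png"))

# Subtracting two ImageHash values gives the Hamming distance between
# the 64-bit hashes; near-duplicates score low even after re-encoding,
# cropping, or quality loss.
distance = suspect - original
if distance <= 10:  # threshold chosen for illustration only
    print(f"Likely derived from the original (distance {distance})")
else:
    print(f"No close match (distance {distance})")
```

A low distance only suggests shared source footage; judging what was altered, such as the blurred mouth region described above, still takes human inspection.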

Some misinformation keeps resurfacing. One video of a gas station explosion in Yemen last year has been reused as both a bombing in Gaza and a Hezbollah attack on Israel.

A fact check collects examples of how the claim has appeared on social media over the previous 24 hours or so, notes engagement figures such as likes and shares, and sets out how we know it is incorrect.

Attaching fact checks to Facebook posts requires two levels of review, with senior colleagues questioning every leap in logic we make. For recurring claims, the process can be completed in half a day; new, more complex cases may take closer to a week. The average is about a day. The back and forth can be frustrating at times, but you want to be as close to 100% sure as possible.

It was very difficult to hear Mark Zuckerberg say on Tuesday that fact checkers are biased. Much of our work is about being fair, and that is instilled in us. I feel it is a very important job: bringing about change and getting good information to people.

This is what I wanted to do in my previous job in local journalism, going down rabbit holes and tracking down sources, but I rarely had the opportunity; it was very much churnalism. As a local reporter, I was concerned, and felt helpless, at the number of conspiracy theories people in Facebook groups were seriously engaging with and believing.

At the end of the day, it can be difficult to switch off; I am still thinking about how to prove something as quickly as possible. When I see a piece of content’s share count constantly climbing, I get a little worried. But when a fact check is published, there is a sense of satisfaction.

Zuckerberg’s decision was disheartening. We put a lot of effort into this work and believe it really matters. But it renews our resolve to fight the good fight. Misinformation will never go away; we will continue to be here, fighting it.

Source: www.theguardian.com

From beef noodles to bots: Taiwanese fact-checkers combat Chinese disinformation and ‘unstoppable’ AI

Charles Yeh’s fight against disinformation in Taiwan started with a bowl of beef noodles. Nine years ago, the Taiwanese engineer was dining at a restaurant with his family when his mother-in-law began removing scallions from his dish, claiming, on the strength of a text message she had received, that they were bad for the liver.

Puzzled by the misinformation, Yeh investigated, debunked the claim on his blog, and shared the post with family and friends via the Line messaging app. It spread quickly, and soon strangers were asking to connect with his personal Line account.

Yeh recognized the demand for fact-checking in Taiwan, leading him to launch the website “MyGoPen” in 2015, which translates to “Don’t be fooled again” in Taiwanese. Within two years, MyGoPen gained 50,000 subscribers and now boasts over 400,000. In 2023, the platform received 1.3 million fact-check requests, debunking various myths and false claims.

Several other fact-checking organizations have also emerged in Taiwan, including the Taiwan FactCheck Center, Cofacts, and Doublethink Lab. However, as these organizations grow, so does the threat of disinformation.

The growing and changing threat from China

A study by the Varieties of Democracy (V-Dem) project at the University of Gothenburg identified Taiwan as the democracy most targeted by foreign disinformation, with the most significant threat originating from across the Taiwan Strait, particularly during election seasons.

Doublethink Lab monitors Chinese influence across various spheres in 82 countries, ranking Taiwan first for China’s impact on society and the media, and 11th overall.

Despite the increasing threats, Yeh and his team at MyGoPen continue to combat disinformation using a combination of human fact-checkers and AI. They leverage advanced technologies to verify information and educate the public about evolving disinformation tactics.

Source: www.theguardian.com