Meta has disclosed that it intervened this year to shut down around 20 covert influence operations worldwide. However, the company said that widespread concerns that AI would distort this year's elections did not materialize.
Nick Clegg, president of global affairs at Meta, which owns Facebook, Instagram, and WhatsApp, said that Russia remains the main source of hostile online activity. He expressed surprise that AI was not used more widely to deceive voters during a year packed with major elections around the world.
The former British deputy prime minister said that in the month before US election day, Meta, which has more than 3 billion users, rejected over 500,000 requests to generate AI images of political figures including Donald Trump, Kamala Harris, J.D. Vance, and Joe Biden.
The company's security teams have been taking down a new covert operation, using fake accounts to steer public debate toward strategic goals, roughly every three weeks. These include Russian networks targeting Georgia, Armenia, and Azerbaijan.
Another operation based in Russia uses AI to create fake news sites resembling well-known brands to weaken support for Ukraine and promote Russia’s role in Africa while criticizing African countries and France.
Mr. Clegg highlighted that Russia remains the most frequent source of the covert influence operations Meta has disrupted, followed by Iran and China. He noted that the impact of AI-generated deceptive content from disinformation campaigns appears to have been limited so far.
While the impact of AI manipulation on video, audio, and photos has been modest, Mr. Clegg warned that these tools are likely to become more pervasive in the future, potentially changing the landscape of online content.
In a recent assessment, the Centre for Emerging Technology and Security suggested that AI-generated deceptive content influenced US election discourse, but that evidence of any impact on the election outcome is lacking. The report warns that AI-based threats could nonetheless damage democratic systems over time.
Sam Stockwell, a researcher at the Alan Turing Institute, highlighted how AI tools may have subtly shaped election discourse and spread harmful content, such as misleading claims and rumors that gained traction during recent elections.
Source: www.theguardian.com