Russia has been attempting online deception campaigns using generative artificial intelligence, but according to a Meta security report published on Thursday, these efforts have not been successful.
Meta, the parent company of Facebook and Instagram, found that AI-powered tactics have brought malicious actors only minimal gains in productivity and content generation, and that it has been able to thwart the deceptive influence campaigns that used them.
Meta’s actions against “systematic fraud” on its platform are in response to concerns that generative AI could be employed to mislead or confuse individuals during elections in the U.S. and other nations.
David Agranovich, Meta’s director of security policy, informed reporters that Russia continues to be the primary source of “coordinated illicit activity” using fake Facebook and Instagram accounts.
Since the 2022 invasion of Ukraine by Russia, these efforts have been aimed at weakening Ukraine and its allies, as outlined in the report.
With the upcoming U.S. election, Meta anticipates Russian-backed online fraud campaigns targeting political candidates who support Ukraine.
Facebook has faced accusations of being a platform for election disinformation, while Russian operatives have utilized it and other U.S.-based social media platforms to fuel political tensions during various U.S. elections, including the 2016 election won by Donald Trump.
Experts are worried that generative AI tools such as ChatGPT and the Dall-E image generator can rapidly create on-demand content, enabling malicious actors to flood social networks with disinformation.
The report notes the use of AI to produce images and videos, translate and generate text, and craft fake news articles and summaries.
When Meta investigates fraudulent activity, the focus is on account behavior rather than posted content.
Influence campaigns span across various online platforms, with Meta observing that X (formerly Twitter) posts are used to lend credibility to fabricated content. Meta shared its findings with X and other internet companies, emphasizing the need for a coordinated defense against misinformation.
When asked about Meta’s view on how X handles scam reports, Agranovich said: “With regards to Twitter (X), we’re still in the process of transitioning. Many people we’ve dealt with there in the past have already gone elsewhere.”
X has disbanded its trust and safety team and reduced content moderation efforts previously used to combat misinformation, making it a breeding ground for disinformation according to researchers.
Source: www.theguardian.com