More than 100 deepfake video ads impersonating Rishi Sunak were paid to be promoted on Facebook in the last month alone, according to research that warns of the risks AI poses ahead of the general election.
The ads may have reached up to 400,000 people, despite appearing to violate several of Facebook’s policies. It is believed to be the first time a prime minister’s likeness has been systematically faked on this scale.
More than £12,929 was spent on the 143 ads, which originated from 23 countries, including the US, Turkey, Malaysia, and the Philippines.
One ad features a fake breaking news segment in which BBC newsreader Sarah Campbell appears to read out a story falsely claiming that a scandal has erupted around Mr. Sunak.
The report falsely claims that Elon Musk has launched an application that can “collect” stock market trades and suggests the government will trial it, alongside a fabricated clip of Mr. Sunak appearing to say he has made that decision.
The clip directs viewers to a fake BBC News page promoting a fraudulent investment scheme.
The research was carried out by Fenimore Harper, the communications company founded by Marcus Beard, a former Downing Street official who led No 10’s counter-conspiracy-theory work during the coronavirus crisis. He warned that the ads, which mark a step change in the quality of fakes, show that this year’s election risks being swamped by large volumes of high-quality, AI-generated falsehoods.
“With the advent of cheap and easy-to-use voice and facial cloning, little knowledge or expertise is required to use a person’s likeness for malicious purposes.”
“Unfortunately, this problem is exacerbated by lax moderation policies for paid advertising. These ads violate several of Facebook’s advertising policies, yet very few of the ones we found appear to have been removed.”
Meta, the company that owns Facebook, has been contacted for comment.
A UK government spokesperson said: “We are working extensively across government, through the Defending Democracy Taskforce and dedicated government teams, to ensure we can respond quickly to any threats to our democratic processes.”
“Our online safety laws go further by placing new requirements on social platforms to swiftly remove illegal misinformation and disinformation – including where it is generated by AI – as soon as they become aware of it.”
A BBC spokesperson said: “In a world of rising disinformation, we urge everyone to ensure they get their news from trusted sources. We are committed to tackling the growing threat of disinformation, and in 2023 we launched BBC Verify, investing in a highly specialized team that uses a range of forensic and open source intelligence (OSINT) tools to investigate, fact-check, verify video, counter disinformation, analyze data and explain complex stories.
“We build trust with our viewers by showing them how BBC journalists know the information they report and explaining how to spot fake and deepfake content. When we become aware of fake content, we take swift action.”
Regulators are concerned that time is running out to enact sweeping changes to ensure Britain’s electoral system is ready for advances in artificial intelligence before the next general election, expected to be held in November.
The government continues to consult regulators, including the Electoral Commission, and under legislation passed in 2022 digital campaign material will be required to carry ‘imprints’ showing who has paid for and promoted it, so that voters can see who is spending money to influence them.
Source: www.theguardian.com