“How awful!”
Gail Huntley picked up the phone and immediately recognized Joe Biden's raspy voice. Huntley, a 73-year-old New Hampshire resident, had planned to vote for the president in the state's upcoming primary and was perplexed when she received a prerecorded message urging her not to vote.
“It's important that you save your vote for the November election,” the message said. “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.”
Huntley quickly realized the call was fake, but assumed Biden's words had been taken out of context. She was shocked to learn that the recording had been generated by AI. Within weeks, the United States outlawed robocalls that use AI-generated voices.
The Biden deepfake was an early test for governments, tech companies and civil society groups, which are embroiled in a heated debate over how best to police an information ecosystem in which anyone can create photorealistic images of candidates, or replicate their voices with terrifying accuracy.
As citizens of dozens of countries, including the US, India and probably the UK, go to the polls in 2024, experts say democratic processes are at serious risk of being disrupted by artificial intelligence.
AI fakes have already been used in elections in Slovakia, Taiwan and Indonesia, and they are being injected into an environment where trust in politicians, institutions and the media is already low.
Watchdog groups have warned that more than 40,000 people have been laid off at the tech companies that host and moderate much of this content, leaving digital media uniquely vulnerable to abuse.
Mission Impossible?
For Biden, concerns about the potentially dangerous uses of AI spiked after he watched the latest Mission: Impossible movie. Over a weekend at Camp David, the president settled in to watch the film, in which Tom Cruise's Ethan Hunt takes on a rogue AI.
After the film, White House deputy chief of staff Bruce Reed said that if Biden hadn't already been concerned about what could go wrong with AI, “he saw plenty more to worry about.”
Since then, Biden has signed an executive order requiring major AI developers to share safety test results and other information with the government.
And the United States is not alone in taking action. The EU is about to pass one of the most comprehensive laws to regulate AI, but it won't come into force until 2026. Proposed regulations in the UK have been criticized for moving too slowly.
But because the United States is home to many of the most innovative technology companies, the White House's actions will have a major impact on how the most disruptive AI products are developed.
Katie Harbath, who spent a decade helping shape policy at Facebook and now works on trust and safety issues at tech companies, says the US government isn't doing enough. Concerns about stifling innovation may be a factor, she says, especially as China moves to develop its own AI industry.
Harbath has had a ringside seat as the information ecosystem evolved from the “golden age” of social media growth, through the great reckoning that followed the Brexit and Trump votes, to the subsequent efforts to stay ahead of disinformation.
Her mantra for 2024 is “panic responsibly.”
In the short term, she says, the regulation and policing of AI-generated content will fall to the very companies developing the tools to create it.
“I don't know if companies are ready,” Harbath said. “There are also new platforms whose first real test will be this election season.”
Last week, major tech companies took a big step, signing an agreement to voluntarily adopt “reasonable precautions” to prevent AI from being used to disrupt democratic elections around the world and to coordinate their efforts.
Signatories include OpenAI, the creator of ChatGPT, as well as Google, Adobe and Microsoft, all of which have launched tools to generate AI-authored content. Many companies have also updated their own rules to prohibit the use of their products in political campaigns. Enforcing those bans is another matter.
OpenAI, whose powerful Dall-E software can create photorealistic images, says the tool rejects requests to generate images of real people, including candidates.
Midjourney, whose AI image generation is considered by many to be the most powerful and accurate, tells users not to use the product to “attempt to influence the outcome of a political campaign or election.”
Midjourney CEO David Holz has said the company is close to banning political images altogether, including those of leading presidential candidates. Some changes appear to be already in effect: when the Guardian asked Midjourney to produce an image of Joe Biden and Donald Trump in a boxing ring, the request was denied and flagged as a violation of the company's community standards.
But when the Guardian entered the same prompt with Biden and Trump replaced by British Prime Minister Rishi Sunak and opposition leader Keir Starmer, the software produced a series of images without a problem.
This example is at the center of concerns among many policymakers about how effectively tech companies are regulating AI-generated content outside the hothouse of the U.S. presidential election.
“Multi-million euro weapons of mass manipulation”
Despite OpenAI's ban on using its tools in political campaigns, Reuters reported that its products were widely used in Indonesia's elections this month to create campaign art, track social media sentiment, build interactive chatbots and target voters.
Harbath said it's an open question how effectively startups like OpenAI can enforce their policies outside the United States.
“Each country is a little different, with different laws and cultural norms,” she said. “For a US-focused company, it can be difficult to grasp how differently things work in other parts of the world.”
Last year's national elections in Slovakia pitted pro-Russian candidates against those advocating stronger ties with the EU. Support for Ukraine's war effort was effectively on the ballot, and EU officials warned that the vote was at risk of interference by Russia and its “multi-million euro weapons of mass manipulation.”
As the election approached and a national media blackout began, an audio recording of the pro-EU candidate Michal Šimečka was posted on Facebook. In the recording, Šimečka appears to discuss ways to rig the election by buying votes from marginalized communities. The audio was fake, and the AFP news agency reported that it appeared to have been manipulated using AI.
However, because media outlets and politicians were required to remain silent under the pre-election moratorium, it was nearly impossible to debunk the recording before polls opened.
The doctored audio appears to have slipped through a loophole in how Facebook's owner, Meta, polices AI-generated material on its platforms. Meta's community standards prohibit posting content that has been manipulated in ways the average person would not detect, or edited to make someone appear to say something they did not say. However, the policy applies only to video.
Pro-Russian candidate Robert Fico won the election and became prime minister.
When will we know that the future is here?
Despite the dangers, there are some signs that voters are better prepared for what's to come than officials think.
“Voters are smarter than we think,” Harbath said. “They may be overwhelmed, but they understand what's going on in the information environment.”
For many experts, the main concern is not the technology we already know about, but the innovations that lie just over the horizon.
Writing in MIT Technology Review, academics have argued that the public debate about how AI threatens democracy is “lacking in imagination.” The real danger, they say, is not what we already fear, but what we cannot yet imagine.
“What rocks are we not looking under?” Harbath asks. “New technologies emerge, new bad actors emerge. There are constant ebbs and flows, and we have to get used to living with them.”
Source: www.theguardian.com