Wearing a crisp blue shirt and speaking with a soft American accent, this well-dressed young man is an unlikely supporter of the military junta leader of the West African nation of Burkina Faso.
“We must…support President Ibrahim Traore…Homeland or death, we must overcome!” he said in a video that began circulating on Telegram in early 2023, just a few months after the dictator took power in a military coup.
Another video, featuring a different person with a similarly professional appearance repeating exactly the same script in front of the Burkina Faso flag, was released around the same time.
A few days later, the same young man in the same blue shirt appeared in a verified account on X, claiming to be Archie, the chief executive of a new cryptocurrency platform.
The videos are fake. They were generated by artificial intelligence (AI) developed by a start-up based in east London. The company, Synthesia, has made waves in an industry racing to perfect lifelike AI videos. Investors have poured in cash, propelling it to “unicorn” status: a privately held company valued at more than $1 billion.
Synthesia’s technology is aimed at clients looking to create marketing materials and internal presentations, and any deepfakes violate its terms of service. But that means little to the models whose digital “puppets” were used in propaganda videos apparently supporting the Burkina Faso dictator. The Guardian tracked down five of them.
“I am in shock and have no words right now. I’ve been in this [creative] industry for over 20 years and I’ve never felt so violated and vulnerable,” said Mark Torres, a London-based creative director who appears in the fake video wearing the blue shirt.
“I don’t want anyone to look at me that way. Just the fact that my image is out there, promoting a military regime in a country I didn’t even know about… People will think I’m involved in the coup,” Torres added after being shown the video for the first time by the Guardian.
The shoot
In the summer of 2022, Connor Yeates received a call from his agent offering him the chance to be one of the first AI models at a new company.
Yeates had never heard of the company, but he had just moved to London and was sleeping on a friend’s couch. An offer of nearly £4,000 for a day’s shoot and three years of use of the images felt like a ‘good opportunity’.
“I’ve been modelling since university and that’s been my main income since I graduated. Then I moved to London to start doing stand-up,” said Yeates, who grew up in Bath.
Filming took place at Synthesia’s studio in east London. He first had his hair and makeup done, and 30 minutes later entered the recording room, where a small crew was waiting.
Yeates wore a variety of costumes, including a white coat, a construction high-vis vest and helmet, and a corporate suit, and was asked to read his lines while looking directly into the camera.
“You have a teleprompter in front of you with the lines, and as you say them they capture the gestures and movements to reproduce later. They’d ask for more enthusiasm: smiling, grimacing, angry,” Yeates said.
The shoot took three hours in total. A few days later, he received the contract and a link to his AI avatar.
“They paid right away. I didn’t have wealthy parents, so I needed the money,” Yeates said. He didn’t think much about it after that.
Like Torres’s, Yeates’s likeness was used in propaganda for Burkina Faso’s current leader.
A Synthesia spokesperson said the company banned the account that created the videos in 2023, strengthened its content review processes and has since “hired more content moderators and improved our moderation capabilities and automated systems to better detect and prevent misuse of our technology”.
But neither Torres nor Yeates knew about the videos until they were contacted by the Guardian a few months ago.
“Unicorn”
Synthesia was founded in 2017 by Victor Riparbelli, Steffen Tjerrild, and two academics from London and Munich.
A year later, the company released a dubbing tool that allows production companies to use AI to translate audio and automatically sync actors’ lips.
This was featured on a BBC program where an English-only news presenter was magically made to appear to speak Mandarin, Hindi and Spanish.
It was the company’s pivot to its current mass-market digital avatar product that earned it the coveted “unicorn” status. The tool lets businesses or individuals create presenter-led videos in minutes for as little as £23 a month. Users choose from dozens of characters of different genders, ages, ethnicities and appearances; once selected, the digital puppet can be placed in almost any setting and given a script, which it can read in more than 120 languages and accents.
Synthesia currently commands a dominant market share, with customers including Ernst & Young (EY), Zoom, Xerox, and Microsoft.
The product’s advances led Time magazine to include Riparbelli among the 100 most influential people in AI in September.
But the technology has also been used to create disinformation videos linked to adversarial states such as Russia and China. Sources suggested to the Guardian that the Burkina Faso videos, which circulated in 2023, were also likely produced by Russian state actors.
Personal impact
Around the same time the Burkina Faso videos began circulating online, two pro-Venezuela videos featuring fake news segments presented by Synthesia avatars also appeared on YouTube and Facebook. In one, a blond male presenter in a white shirt denounced “Western media claims” about economic insecurity and poverty, instead painting a highly misleading picture of the country’s financial situation.
London-based actor and Synthesia model Dan Dewhurst, whose likeness was used in the video, told the Guardian: “People might have quietly judged me. I might have lost clients. But that’s not me, it’s just my face. They would have thought I agreed with it.”
“I was furious. It really, really took a toll on my mental health. [It caused] an overwhelming feeling of anxiety,” he added.
A Synthesia spokesperson said the company was in contact with some of the actors whose likenesses were used: “We sincerely regret that these historic incidents have had a negative personal or professional impact on the people you spoke to.”
But once the damage caused by a deepfake has spread, it is difficult to undo.
Dewhurst said seeing his face used to spread propaganda was the worst-case scenario, adding: “When we’re worried, our brains often catastrophise. It was really scary to see my fears come true.”
“Roller coaster”
Last year, more than 100,000 unionized actors and performers went on strike in the United States to protest the use of AI in the creative arts. The strike was called off last November after the studios agreed to contractual safeguards, including informed consent before digital reproduction and fair compensation for such use. Video game performers continue to strike over the same issue.
Last month, a bipartisan bill, the NO FAKES Act, was introduced in the United States; it aims to hold companies and individuals liable for damages for violations involving digital replicas.
However, outside of AI-generated sexual content, there are still few practical mechanisms to help the artists themselves.
“These AI companies are taking people on a really dangerous roller coaster,” said Kelsey Farish, a London-based media and entertainment lawyer who specializes in generative AI and intellectual property. “And guess what? People have been on this roller coaster and now people are starting to get hurt.”
Under the GDPR, models can technically ask Synthesia to delete data that includes their likeness or image. In practice, this is very difficult.
A former Synthesia employee, who asked to remain anonymous for fear of retribution, explained that AI cannot “unlearn” or remove what it may have gleaned from a model’s body language. To do so, the entire AI model must be replaced.
A Synthesia spokesperson said: “Many of the actors we work with re-engage with us for new shoots… At the start of a collaboration we explain the terms of use and how our technology works, so they understand what the platform can do and the safeguards we have in place.”
He said the company does not allow “stock avatars to be used for political content, including content that is factually accurate but potentially polarizing”, and that its policies are designed to prevent the avatars from being used for “manipulation, deceptive practices, impersonation and false association”.
“Our processes and systems may not be perfect, but our founders are committed to continually improving them.”
When the Guardian tested Synthesia’s technology with various disinformation scripts, attempts to use the stock avatars were blocked. However, it was possible to recreate the Burkina Faso propaganda videos with a personally created avatar and download them, neither of which should have been allowed under Synthesia’s policies. Synthesia said this did not violate its terms, as it respects the right of individuals to express their political views, but it later blocked the account.
Using the audio-only avatar tool, the Guardian was also able to produce and download a clip saying “Long live Hitler” in several languages, and another saying “Kamala Harris rigged the election” in an American accent.
Synthesia suspended its free AI audio service after being contacted by the Guardian, saying the technology behind the product was a third-party service.
Aftermath
Learning that his likeness had been used in a propaganda video left Torres with a deep feeling of betrayal: “It makes me so angry to know that this company I trusted with my image would let something like this happen. It could cost me my life.”
Torres was invited to do another shoot with Synthesia this year, but he declined. His contract ends in a few months, and his Synthesia avatar will be removed. But what will happen to his avatar in the Burkina Faso video is unclear, even to him.
“Now I understand why it’s so dangerous to hand over your face like that. I’m ashamed to have taken part in this,” he said.
YouTube has since removed the propaganda video featuring Dewhurst, but it remains available on Facebook.
Both Torres and Yeates still appear in the video ads on the front page of Synthesia’s website.
Source: www.theguardian.com