Man Fined $343,500 for Creating Deepfake Porn of Prominent Australian Women in Landmark Case

The man who shared deepfake pornographic images of well-known Australian women has been heavily fined in a landmark legal case intended to send a “strong message.”

On Friday, the federal court ordered Anthony Rotondo, also known as Antonio, to pay a penalty of $343,500 plus legal costs, after the online regulator, the eSafety Commissioner, brought proceedings against him nearly two years ago.

Rotondo was responsible for posting the images to a website called MrDeepFakes.com.


The regulator argued that substantial civil penalties were essential to underscore the seriousness of breaches of online safety laws and the harm inflicted on the women who were victims of image-based abuse.

“This action sends a strong message regarding the repercussions for individuals who engage in image-based abuse through deepfakes,” the watchdog stated late Friday.

“eSafety is profoundly concerned about the creation and distribution of non-consensual explicit deepfake images, as these can lead to significant psychological and emotional distress.”

Commissioner Julie Inman Grant took Rotondo to the federal court in 2023 over his non-compliance with a removal notice, which he dismissed on the basis that he was not living in Australia at the time.

“If you believe you’re in the right, get an arrest warrant,” he responded.

After the court ordered Rotondo to remove the images and refrain from sharing them further, he instead emailed them to 50 addresses, including the eSafety Commissioner and various media outlets.

The commissioner initiated federal court proceedings shortly after police established that Rotondo had traveled from the Philippines to the Gold Coast.


He eventually admitted to the contempt charges.

The images were removed after Rotondo voluntarily provided passwords and necessary details to the Commissioner’s officers.




Source: www.theguardian.com

Universal Detectors Identify AI Deepfake Videos with Unprecedented Accuracy

A deepfake video of the Australian prime minister, Anthony Albanese, on a smartphone

Australian Associated Press/Alamy

A universal deepfake detector has demonstrated record accuracy in identifying videos that have been altered or entirely generated by AI. The technology could help flag deepfake adult content, deepfake scams, and misleading political videos generated by AI.

The rise of cheap, accessible deepfake creation tools has led to rampant online distribution of synthetic videos. Many involve non-consensual depictions of women, including celebrities and students. Deepfakes have also been used to sway elections and to run financial scams targeting everyday consumers and corporate executives.

Most AI models designed to spot synthetic videos, however, focus primarily on faces. They excel at identifying the specific type of deepfake in which a person’s face is swapped into existing footage, but they falter when a video’s background has been altered or the whole clip is synthetic. “Our approach tackles that particular issue, considering that the entire video could be synthetically generated,” says Rohit Kundu at the University of California, Riverside.

Kundu and his team developed a universal detector that uses AI to analyze both faces and background elements within a video, looking for subtle spatial and temporal inconsistencies in deepfake content. It can identify irregular lighting on people inserted into face-swapped videos, as well as discrepancies in the background details of fully AI-generated clips. The detector can even recognize manipulation in synthetic videos with no human faces at all, and it flags realistic scenes from video games such as Grand Theft Auto V, even though these are not AI-generated.
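The published model is not reproduced here, but the two-branch idea the researchers describe — per-frame spatial features computed over the whole frame, so backgrounds count and not just faces, fed into a temporal model that can catch frame-to-frame inconsistencies — can be sketched in a few lines of PyTorch. Everything below (the layer sizes, the LSTM choice, the `UniversalDetectorSketch` name) is an illustrative assumption, not the authors’ architecture:

```python
# A minimal sketch (not the published model) of whole-frame deepfake
# detection: spatial features per frame, a temporal model across frames.
import torch
import torch.nn as nn

class UniversalDetectorSketch(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Spatial branch: a small CNN applied to every full frame,
        # so background regions contribute, not only the face.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal branch: an LSTM over per-frame features, which can
        # react to flickering lighting or other frame-to-frame drift.
        self.temporal = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim, 1)  # P(fake) logit

    def forward(self, clip):                      # clip: (B, T, 3, H, W)
        b, t, c, h, w = clip.shape
        frame_feats = self.spatial(clip.reshape(b * t, c, h, w))
        frame_feats = frame_feats.reshape(b, t, -1)
        _, (hidden, _) = self.temporal(frame_feats)
        return torch.sigmoid(self.classifier(hidden[-1]))  # (B, 1)

# Example: score a random 16-frame clip (a stand-in for real video frames).
model = UniversalDetectorSketch()
print(model(torch.rand(1, 16, 3, 64, 64)))
```

A real system would of course be trained on labeled real and fake clips; the point of the sketch is only that the classifier sees the entire frame over time, so an inconsistent background can raise the fake score even when the face looks clean.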

“Most traditional methods focus on AI-generated facial videos, such as face swaps and lip-synced content,” says Siwei Lyu at the University at Buffalo in New York. “This new method is broader in its applications.”

The universal detector achieved accuracy of 95% to 99% in recognizing four sets of test videos featuring manipulated faces, surpassing all previously published methods for detecting this type of deepfake. In evaluations on fully synthetic videos, it also produced more accurate results than any other detector assessed to date. The researchers presented their findings at the 2025 IEEE Conference on Computer Vision and Pattern Recognition in Nashville, Tennessee, on 15 June.

Several researchers from Google also contributed to the new detector. Google has not responded to inquiries about whether the detection method could help identify deepfakes on platforms such as YouTube, though the company is among those advocating for watermarking tools that label AI-generated content.

The universal detector still has room for improvement. For instance, it would be useful to detect deepfakes used during live video conference calls — a tactic some scammers are now employing.

“How can you tell if the individual on the other end is genuine or a deepfake-generated video, even with network factors like bandwidth affecting the transmission?” asks Amit Roy-Chowdhury at the University of California, Riverside. “This is a different area we’re exploring in our lab.”


Source: www.newscientist.com

Commissioner Advocates for Ban on Apps Creating Deepfake Nude Images of Children

AI-powered “nudification” apps that generate sexually explicit images of children should be banned, the children’s commissioner for England, Rachel de Souza, has warned amid rising fears for potential victims.

Girls have reported refraining from sharing images of themselves on social media due to fears that generative AI tools could alter or sexualize their clothing. Although creating or disseminating sexually explicit images of children is illegal, the underlying technology remains legal, according to the report.

“Children express fear at the mere existence of this technology. They worry that strangers, classmates, or even friends might use smartphones and these specialized apps to create nude images of them,” she said.

“While the online landscape is innovative and continuously evolving, there’s no justifiable reason for these specific applications to exist. They have no rightful place in our society, and tools that enable the creation of naked images of children using deepfake technology should be illegal.”

De Souza has proposed an AI bill that would require developers of generative AI tools to address the risks their products pose to children, and has urged the government to implement an effective system for removing sexually explicit deepfake images of children. This should be underpinned by policy that recognizes deepfake sexual abuse as a form of violence against women and girls.

Meanwhile, the report calls on Ofcom to ensure robust age verification for nudification apps, and for social media platforms to restrict children’s access to sexually explicit deepfake tools, in line with online safety laws.

The findings revealed that 26% of respondents aged 13 to 18 had encountered sexually explicit deepfake images of celebrities, friends, teachers, or themselves.

Many AI tools reportedly focus solely on female bodies, thereby contributing to an escalating culture of misogyny, the report cautions.

An 18-year-old girl told the commissioner:

The report highlighted cases like that of Mia Janin, who tragically died by suicide in March 2021, illustrating connections between deepfake abuse, suicidal thoughts, and PTSD.

In her report, de Souza said new technologies confront children with concepts they struggle to comprehend, evolving at a pace that outstrips their ability to recognize the dangers.

Lawyers told the Guardian that young people arrested for sexual offences involving deepfake experimentation often have little understanding of the consequences of their actions.

Danielle Reece-Greenhalgh, a partner at the law firm Corker Binning, said the existing legal framework poses significant challenges for law enforcement agencies trying to identify and protect victims of abuse.

She said a ban on such apps could ignite debates over internet freedom and might disproportionately affect young men experimenting with AI software without understanding the consequences.

Reece-Greenhalgh added that while the criminal justice system strives to treat adolescent offending with understanding, efforts to avoid criminalizing young people are harder when offences take place in private settings, with consequences spilling over into schools and communities.

Matt Hardcastle, a partner at Kingsley Napley, said young people face an online “minefield” around access to illegal sexual and violent content, noting that many parents are unaware of how easily their children can stumble into harmful situations.

“Parents often see these situations from their child’s perspective, unaware that what their children are doing can be both illegal and harmful to themselves and others,” he said. “Children’s brains are still developing, so they approach risk-taking very differently.”

Marcus Johnstone, a criminal lawyer specializing in sexual offences, said he was working with an increasingly young cohort involved in such crimes, often without their parents’ knowledge. “Typically these offenders are young men, seldom young women, shut away in their bedrooms, while their parents believe they are just playing games,” he said. “These offences have largely emerged because of the internet, with most sexual crimes now taking place online, driven by forums designed to cultivate criminal behavior in children.”

A government spokesperson stated:

“Creating, possessing, or distributing child sexual abuse material, including AI-generated images, is appalling and illegal. Platforms of all sizes must remove this content or face significant fines under online safety laws. The UK is pioneering AI-specific child sexual abuse offences, making it illegal to possess, create, or distribute tools designed to generate abhorrent child sexual abuse material.”

  • In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child can call 0808 800 5000. The National Association for People Abused in Childhood (NAPAC) supports adult survivors on 0808 801 0331. In Australia, children, young adults, parents, and teachers can contact Kids Helpline on 1800 55 1800, or Bravehearts on 1800 272 831. Adult survivors can contact the Blue Knot Foundation on 1300 657 380.

Source: www.theguardian.com

Scarlett Johansson raises concerns about AI dangers following viral Kanye West deepfake video

Scarlett Johansson has raised concerns about the threat of AI after deepfake videos featuring her and other well-known Jewish celebrities circulated in response to recent antisemitic comments by Kanye West.

The deepfake video showcased AI-generated versions of numerous celebrities, such as Johansson, David Schwimmer, Jerry Seinfeld, Drake, Adam Sandler, Steven Spielberg, and Mila Kunis.

It began with a deepfake of Johansson wearing a t-shirt bearing a hand with a raised middle finger, adorned with a Star of David and Kanye’s name. The video was set to the tune of “Hava Nagila,” a traditional Jewish folk song typically played at celebrations, and concluded with a message urging viewers to join the fight against antisemitism.

Other celebrities depicted in the video included Sacha Baron Cohen, Jack Black, Natalie Portman, Adam Levine, Ben Stiller, and Lenny Kravitz.

Johansson expressed distress over the dissemination of AI-generated videos featuring her likeness. In a statement to People, she said: “As a Jewish woman, I have no tolerance for antisemitism or hate speech of any kind. But I also firmly believe that the potential for hate speech multiplied by AI is a far greater threat than any one person who takes accountability for it. We must call out the misuse of AI, no matter its messaging, or we risk losing a hold on reality.”

The video circulated after West made a series of derogatory remarks on social media, describing himself as a “Nazi” and praising Hitler, before deactivating his account.

West also ran an advertisement during the Super Bowl directing viewers to his website, which was subsequently shut down by Shopify for policy violations. Fox TV Stations CEO Jack Abernethy criticized the ads in a memo to staff.

Johansson has been an outspoken advocate against the unauthorized use of AI. She previously threatened legal action against OpenAI for using a voice resembling hers in its ChatGPT product; OpenAI eventually withdrew the voice, known as Sky, following significant backlash.

Johansson emphasized, “While I have been a prominent target of AI misuse, the reality is that the threat of AI affects us all.”

She further stated: “There is a pressing need for progressive nations to enact regulations safeguarding citizens from the imminent perils of AI. It is alarming that the US government remains inert on the issue.”

The actor urged lawmakers to enact legislation combating AI abuse, highlighting it as “a bipartisan issue with profound implications for humanity’s immediate future.”

Her remarks coincide with a UK Advertising Standards Authority report identifying fake celebrity ads as the most widespread form of fraudulent online advertising.

The AI-generated video was created by Ori Bejerano, according to his Instagram bio. His original post noted that the content had been digitally altered or generated with AI to create a realistic appearance.

Source: www.theguardian.com

The Illusion of God: Why the Pope Is So Popular as a Deepfake Image in the Age of Artificial Intelligence

For the pope, it was the wrong kind of Madonna.

The pop legend behind the ’80s anthem “Like a Prayer” has been at the center of controversy in recent weeks after posting a deepfake image of the Pope hugging her on social media. This further fanned the flames of an already heated debate over the creation of AI art, in which Pope Francis plays a symbolic and unwilling role.

The leader of the Catholic church is no stranger to AI fabrication. One of the defining images of the AI boom was Francis in a Balenciaga down jacket, a stunningly realistic photo that went viral last March and was seen by millions of people. But Francis didn’t see the funny side. In January, he referenced the Balenciaga image in a speech on AI and warned about the impact of deepfakes.


An AI-generated image of Pope Francis wearing a down jacket. Illustration: Reddit

“Fake news… Today, ‘deepfakes’ – images that appear completely plausible but are false – can be created and disseminated. I have been the subject of this myself,” he said.

Other deepfakes include Francis wearing a pride flag and holding an umbrella on the beach. Like the Balenciaga images, these were created by the Midjourney AI tool.

Rick Dick, the Italian digital artist who created the Madonna image, told the Guardian he did not intend to offend with the photo of Francis putting his arm around Madonna’s waist and hugging her. Another image on his Instagram page, which seamlessly merges the pope’s face with that of Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, is more likely to cause offence.


AI image of Madonna and Pope Francis. Illustration: @madonna/Instagram

Rick Dick said the Mangione image was intended to satirize the American public’s obsession with Mangione, who has been “elevated into a god-like figure” online.

“My goal is to make people think and, if possible, smile,” said the artist, who goes by the stage name Rick Dick, but declined to give his full name.

He said that memes – viral images that are endlessly tweaked and reused online – are our “new visual culture,” and that he is fascinated by their ability to convey deep ideas quickly.

Experts say the Pope is a clear target for deepfakes because of the vast digital “footprint” of videos, images, and audio recordings associated with him. AI models are trained on the open internet, which is filled with content featuring prominent public figures, from politicians to celebrities to religious leaders.

Sam Stockwell, a researcher at Britain’s Alan Turing Institute, said: “The pope is frequently featured in public life and there are vast amounts of photos, videos, and audio clips of him on the open web.”

“Because AI models are often trained indiscriminately on such data, they become more attuned to the facial features of individuals like the pope than to those of people with smaller digital footprints. That makes it much easier to reproduce his likeness.”

Rick Dick said the AI model he used to create the image of Francis posted to his Instagram account, and later reposted by Madonna, was built on a paid platform called Krea.ai and trained specifically on images of the pope and the pop star. However, realistic photos of Francis can also be created easily with freely accessible models such as Stable Diffusion, which can place Francis on a bicycle or a soccer field with a few simple prompts.

Stockwell added that there is also an obvious appeal to juxtaposing powerful figures with unusual or embarrassing situations, which is a fundamental element of satire.

“He is associated with strict rules and traditions, so some people want to deepfake him in unusual situations compared to his background,” he said.

Adding AI to the satirical mix will likely lead to more deepfakes of the pope.

“I like to use celebrities, objects, fashion, and events to mix the absurd and the unconventional to provoke thought,” said Rick Dick. “It’s like working on a never-ending puzzle, always looking for new creative connections. The Pope is one of my favorite subjects to work on.”

Source: www.theguardian.com

Meta removes over 9,000 fraudulent Facebook pages as celebrity deepfake scams cost Australians $43.4 million

Australians are less likely to see deepfake images of celebrities promoting fraudulent crypto investments on Facebook after Meta removed around 9,000 scam pages and 8,000 AI-generated celebrity scams in the first six months of a new platform for sharing fraud information with banks.

Between January and August 2024, Australians reported $43.4 million in losses to social media scams through Scamwatch, with almost $30 million related to fake investment scams.

Meta has been dealing with scams using deepfake images of celebrities like David Koch, Gina Rinehart, Anthony Albanese, Larry Emdur, and Guy Sebastian. Politicians and regulators have pressured the company to address these scams, especially those facilitating investment fraud.

Mining tycoon Andrew Forrest is suing Meta for failing to address fraudulent activity using his image.

Meta has partnered with the Australian Financial Crimes Exchange (AFCX) to launch the Fraud Intelligence Reciprocal Exchange (Fire). This channel allows banks to report known scams to Meta, and enables Meta to alert banks to fraud discovered on its platforms.

Seven banks – ANZ, Bendigo Bank, CBA, HSBC, Macquarie, NAB, and Westpac – are participating in the Fire program. A separate program built on AFCX’s Intel Loop information-sharing service includes telecommunications companies such as Optus, Pivotel, Telstra, and TPG, as well as the National Anti-Scam Centre.

Since the pilot launch in April, Meta has removed over 9,000 fraudulent pages and 8,000 AI-generated celebrity investment scams on Facebook and Instagram based on 102 reports received.

While the early results are promising, the number of Fire reports is low compared with the losses reported to Scamwatch, which received 1,600 reports of social media scam losses in August alone.

Meta reported removing 1.2 billion fake accounts worldwide in the last quarter, with 99.7% removed before user reports.

AFCX’s Rhonda Lau mentioned that the program aims to make Australia a less attractive target for fraudsters.

Meta’s David Agranovich stated that the system will help detect fraud outside the platform, connecting the dots between fraudulent activities on Facebook and Instagram.

Meta provides the list of blocked domains to partners and will give Fire partners access through its threat exchange system, which is used to detect criminal activity such as covert influence operations and child abuse on the platform.

Mr. Agranovich acknowledged the frustration Australians may face in reporting fraud to Meta and mentioned plans for improvement.

Both the Commonwealth Bank and ANZ welcomed the collaboration with Meta. Assistant Treasurer Stephen Jones has introduced a draft bill to combat fraud and provide a proper dispute resolution process for scam victims, with consultation closing on 4 October.

Source: www.theguardian.com

Federal police union advocates for creation of portal for reporting AI deepfake victimization

The federal police union is calling for the establishment of a dedicated portal through which victims of AI deepfakes can report incidents to police, citing the pressure placed on police to quickly prosecute the first person charged last year over distributing deepfake images of women.

Attorney General Mark Dreyfus introduced legislation in June to criminalize the sharing of sexually explicit images created using artificial intelligence without consent. The Australian Federal Police Association (AFPA) supports the bill, citing the challenges of enforcing current laws.

The AFPA highlighted a case in which a man was arrested for distributing deepfake images to schools and sports associations in Brisbane, emphasizing the complexity of investigating deepfakes, since identifying both perpetrators and victims can be difficult.

The AFPA also raised concerns about the limitations of pursuing civil action against deepfake creators, citing the high costs and the difficulty of identifying those responsible for distributing the images.

It also noted the difficulty of determining the origins of deepfake images and said law enforcement needs better resources and legislation to address the issue.


The association also urged an overhaul of reporting mechanisms and an educational campaign to raise awareness of the issue.

The committee is set to convene its first hearing on the proposed legislation in the coming week.

Source: www.theguardian.com

Signs of a Deepfake: Smudged chins, strange hands, and odd numbers

This is a crucial election year for the world, with misinformation swirling on social media as countries including the UK, US and France go to the polls.

There are major concerns about whether deepfakes – images and audio of key politicians created using artificial intelligence to mislead voters – could influence election outcomes.

While it has not been a major talking point in the UK elections so far, examples are steadily emerging around the world, including in the US, where a presidential election is looming.

Notable visual elements include:

Discomfort around the mouth and jaw

In deepfake videos, the area around the mouth can be the biggest clue: There may be fewer wrinkles on the skin, less detail around the mouth, and a blurry or smudged chin. Poor syncing between a person’s voice and mouth is another telltale sign.

One deepfake video, posted on 17 June, shows Nigel Farage destroying Rishi Sunak’s house in the video game Minecraft – part of a satirical trend of deepfake videos depicting politicians playing online games.

A few days later, another video showed Keir Starmer playing Minecraft and setting a trap in “Nigel’s pub.”

Dr Mhairi Aitken, an ethics researcher at the Alan Turing Institute, the UK’s national AI institute, says the first giveaway in the Minecraft deepfakes is, of course, the absurdity of the situation, but another sign of AI-generated and manipulated media is imperfect synchronization of voice and mouth.

“This is particularly clear in the section where Farage is speaking,” Aitken said.

Another way to tell, Aitken says, is to see if shadows fall in the right places, or if lines and creases in the face move in the way you expect them to.

Ardi Janjeva, a researcher at the institute, added that the low resolution of the overall video is another telltale sign people should look out for because it “looks like something that was quickly stitched together.” He said people have become accustomed to this amateurish technique due to the prevalence of “rudimentary, low-resolution scam email attempts.”

This lo-fi approach also shows up in prominent areas like the mouth and jawline, he says: “There’s excessive blurring and smudging of the facial features that hold the viewer’s attention, like the mouth.”

Strange elements of the speech

Another deepfake video used audio from Keir Starmer’s 2023 new year speech, edited so that he appears to pitch an investment scheme.

If you listen closely, you’ll notice some odd sentence structure: Starmer repeatedly says “pound” before a figure, for example “pound 35,000 per month”.

Aitken said the voice and mouth were again out of sync and the lower part of the face was blurred, adding that the use of “pounds” before the numbers suggested a text-to-speech tool had probably been used to recreate Starmer’s voice.

“It doesn’t mirror typical spoken language patterns, which suggests a text-to-speech tool was probably used, though this has not been confirmed,” she says. “There are clues in the intonation as well, which maintains a fairly monotonous rhythm and pattern throughout. A good way to check the authenticity of a video is to compare the voice, mannerisms, and expressions with a recording of the real person to see if they are consistent.”
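As a toy illustration of the “odd numbers” clue Aitken describes (my own sketch, not from the researchers): scripts fed to text-to-speech tools often verbalize “£35,000” as “pound 35,000,” so a transcript in which a currency word precedes the digits is a weak hint that the audio was synthesized from written text. The pattern and function below are hypothetical:

```python
# Toy heuristic: flag transcript phrases where a currency word comes
# before the figure, a common artifact of text-to-speech narration.
import re

TTS_PATTERN = re.compile(r"\b(pound|dollar|euro)s?\s+[\d,]+\b", re.IGNORECASE)

def odd_number_phrases(transcript: str) -> list[str]:
    """Return phrases where a currency word precedes the number."""
    return [m.group(0) for m in TTS_PATTERN.finditer(transcript)]

print(odd_number_phrases("You could earn pound 35,000 per month."))
# ['pound 35,000']
```

This is a weak signal on its own – a transcription quirk could produce the same pattern – which is why Aitken recommends combining it with intonation and comparison against genuine recordings.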

Face and body consistency

This deepfake video of Ukrainian President Volodymyr Zelensky calling on civilians to lay down their arms to Russian forces was circulated in March 2022. The head is disproportionately large compared to the rest of the body, and the skin on the neck and face is a different color.

Hany Farid, a professor at the University of California, Berkeley and an expert on deepfake detection, said this is “a classic deepfake.” The immobile body is the telltale sign, he said. “The defining feature of this so-called Puppet Master deepfake is that the body is immobile from the neck down.”

Discontinuities throughout the video clip

The video, which went viral in May 2024, falsely shows US state department spokesman Matthew Miller telling a reporter that “there are virtually no civilians left in Belgorod,” appearing to justify Ukrainian military strikes on the Russian city. The video was tweeted by the Russian embassy in South Africa and has since been removed, according to a BBC journalist.

Source: www.theguardian.com

Detecting A Deepfake: Top Tips Shared by Detection Tool Maker

Humans still play a crucial role in identifying whether a photo or video was created using artificial intelligence.

Various detection tools are available to help, some commercial and some developed in research labs. With these deepfake detectors, you can upload or link to suspected fake media, and the detector will indicate the likelihood that it was generated by AI.

However, relying on your senses and key clues can also offer valuable insights when analyzing media to determine the authenticity of a deepfake.

Although the regulation of deepfakes, especially in elections, has been slow to catch up with AI advancements, efforts must be made to verify the authenticity of images, audio, and videos.

One such tool is the DeepFake-o-meter, developed by Siwei Lyu at the University at Buffalo. This free, open-source tool combines algorithms from various research labs to help users determine whether media was generated by AI.

The DeepFake-o-meter demonstrates both the advantages and limitations of AI detection tools by rating the likelihood of a video, photo, or audio recording being AI-generated on a scale from 0% to 100%.

AI detection algorithms can exhibit biases based on their training data, and while tools like the DeepFake-o-meter are transparent about this variability, commercial tools may not disclose their limitations.
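A minimal sketch of what ensemble-style scoring in the spirit of the DeepFake-o-meter might look like: several independent detectors run over the same file, with both the average probability and the spread reported, since individual algorithms disagree depending on how they were trained. The detector functions here are hypothetical stand-ins, not the tool’s real API:

```python
# Sketch of ensemble scoring: aggregate several detectors and surface
# their disagreement instead of a single opaque number.
from statistics import mean, stdev
from typing import Callable

def score_with_ensemble(media_path: str,
                        detectors: dict[str, Callable[[str], float]]) -> None:
    scores = {name: fn(media_path) for name, fn in detectors.items()}
    for name, p in scores.items():
        print(f"{name:>12}: {p:6.1%} likely AI-generated")
    vals = list(scores.values())
    print(f"{'mean':>12}: {mean(vals):6.1%}  (spread ±{stdev(vals):.1%})")

# Hypothetical detectors, each returning P(AI-generated) in [0, 1].
detectors = {
    "lab_a_faces": lambda path: 0.91,   # face-artifact model
    "lab_b_audio": lambda path: 0.62,   # voice-anomaly model
    "lab_c_noise": lambda path: 0.78,   # sensor-noise model
}
score_with_ensemble("suspect_clip.mp4", detectors)
```

Reporting the spread matters: a high mean with wide disagreement is a weaker verdict than the same mean with all models in accord, which is the kind of nuance the article says commercial tools often hide.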

Lyu aims to empower users to verify the authenticity of media by continually improving detection algorithms and encouraging collaboration between humans and AI in identifying deepfakes.

audio

A notable instance of a deepfake in US elections was a robocall in New Hampshire using an AI-generated voice of President Joe Biden.

When subjected to various detection algorithms, the robocall clips showed varying probabilities of being AI-generated based on cues like the tone of the voice and presence of background noise.

Detecting audio deepfakes relies on anomalies like a lack of emotion or unnatural background noise.

photograph

Photos can reveal inconsistencies with reality and human features that indicate potential deepfakes, like irregularities in body parts and unnatural glossiness.

Analyzing AI-generated images can uncover visual clues such as misaligned features and exaggerated textures.

An AI-generated image purportedly showing Trump and black voters. Photo: @Trump_History45

Discerning the authenticity of AI-generated photos involves examining details like facial features and textures.

video

Video deepfakes can be particularly challenging due to the complexity of manipulating moving images, but visual cues like pixelated artifacts and irregularities in movements can indicate AI manipulation.

Detecting deepfake videos involves looking for inconsistencies in facial features, mouth movements, and overall visual quality.

The authenticity of videos can be determined by analyzing movement patterns, facial expressions, and other visual distortions that may indicate deepfake manipulation.

Source: www.theguardian.com

AI Used to Create Tom Cruise Deepfake Video Targeting Paris Olympics for Russia

According to a new report from Microsoft, Russia is engaged in a disinformation campaign targeting the Paris Olympics, including a deepfake video featuring Tom Cruise as the narrator of a documentary critical of the organization behind the Games. The full report is available on Microsoft’s website.

Microsoft revealed that a network of pro-Russian groups is conducting a “malign influence campaign” against France, President Emmanuel Macron, the International Olympic Committee (IOC), and the upcoming games in Paris. Despite Russia’s ban from the 2024 Olympics, a few Russian athletes may still participate as neutrals.

One of the tactics was a fake video posted on Telegram titled “Olympics Has Fallen,” a play on the movie “Olympus Has Fallen.” It falsely claimed to be a Netflix production, featured a faked Cruise voice, and criticized the IOC. Microsoft judged the video to be more sophisticated than typical influence-campaign material.

Microsoft attributed the fake video to a Kremlin-linked group called Storm-1679, known for its deceptive influence operations. Storm-1679 has been spreading fear through videos suggesting violence is likely at the Olympics, alongside fake news broadcasts impersonating Euronews and France 24 to push false narratives about the event.

Social media accounts associated with Storm-1679 have also posted images of graffiti in Paris threatening violence against Israelis attending the Olympics. Microsoft said the images were likely digitally generated rather than photographs of real graffiti.

Russia has a history of trying to disrupt Olympic events, with strategies dating back to the Soviet Union’s boycott of the 1984 Los Angeles Games. Another Russian group, Storm-1099 or “Doppelganger,” has launched a fake French news site spreading allegations of corruption at the IOC and potential violence in Paris.

Microsoft warned that Russia’s disinformation efforts might expand to other languages and involve the use of automated accounts and generative AI systems to create convincing fake content. This mirrors similar Chinese attempts to spread disinformation using AI-generated materials, as detailed in a previous report by Microsoft.

Source: www.theguardian.com

Arup, a British engineering firm, duped out of £20m in deepfake scam

Arup, a British engineering firm, fell victim to a deepfake scam when an employee was tricked into transferring HK$200 million (about £20 million) to criminals during an artificial intelligence-generated video call.

Reports from Hong Kong police in February revealed that an employee of an unnamed company was duped into sending a large sum of money in a fraudulent call impersonating a company executive.

Arup confirmed that they were the company involved and had reported the incident to the Hong Kong police earlier this year. They admitted that fake audio and video had been used in the fraud.

The company stated, “Our financial stability and business operations remained unaffected, and there was no compromise to our internal systems.”

Arup’s global chief information officer, Rob Greig, mentioned that the organization faces frequent cyberattacks, including deepfakes, as seen in this incident.

Greig emphasized the need for increased awareness regarding the sophistication of cyber attackers, especially after Arup’s experience.

A report from the Financial Times newspaper first identified Arup as the target of the scammers.

Arup, known as one of the world’s leading consulting engineering firms, employs over 18,000 individuals and is recognized for its involvement in projects like the Sydney Opera House and London’s Crossrail transport scheme.

Another recent case involving a deepfake scam targeted WPP CEO Mark Read, as reported by The Guardian last week.

Hong Kong police disclosed that employees transferred HK$200 million in total to five local bank accounts in 15 transactions during a video conference call where the perpetrators posed as senior company officials.

The investigation into the scam is ongoing, but no arrests have been made yet, with the case classified as “obtaining property by deception.”

Source: www.theguardian.com

Producing Sexually Explicit Deepfake Images to Become a Crime in the UK

The Ministry of Justice has declared that the creation of sexually explicit “deepfake” images will soon be considered a criminal offense under new legislation.

Those found guilty of producing such images without consent could face a criminal record and an unlimited fine, with possible imprisonment if the images are shared more widely.

The ministry stipulates that creating such a deepfake will be punishable irrespective of whether the creator intended to share it. Last year’s online safety laws already criminalized the dissemination of intimate deepfakes, which advancements in artificial intelligence have made far easier to produce.

The offense is anticipated to be added to the Criminal Justice Bill currently under parliamentary review. Minister Laura Farris affirmed that the creation of deepfake sexual content is unacceptable under any circumstances.

“This reprehensible act of degrading and dehumanizing individuals, particularly women, will not be tolerated. The potential repercussions of widespread sharing of such material can be devastating. This government is unwavering in its stance against it.”

Yvette Cooper, the Shadow Home Secretary, voiced support for the new law, stating: “It is imperative to criminalize the production of deepfake pornography. Imposing someone’s image onto explicit content violates their autonomy and privacy, posing significant harm and must be condemned.

“Law enforcement must be equipped with the necessary training and resources to enforce these laws rigorously and dissuade offenders from acting with impunity,” added Cooper.

Deborah Joseph, editor-in-chief of Glamour UK, lauded the proposed amendments, citing a survey revealing that 91% of readers perceive deepfake technology as a threat to women’s safety. Personal accounts from victims emphasized the severe impact of this activity.

“While this marks a crucial initial step, there remains a considerable journey ahead for ensuring women feel completely safeguarded from this atrocious practice,” asserted Joseph.

Source: www.theguardian.com

Family brings battle against deepfake nude images to Washington

Francesca Mani returned home from school in suburban New Jersey last October and shared shocking news with her mother, Dorota.

The 14-year-old and her friends had been targeted with abuse at Westfield High School through the distribution of fake nude images created using artificial intelligence.

Dorota, familiar with the power of this technology, was still surprised by how easily the images had been generated.

She expressed her disbelief: “I didn’t anticipate how quickly this could happen from just a single image. It’s a risk for anyone at the simple click of a button.”

An investigation by The Guardian’s Black Box podcast series revealed the origins and operators of an app called ClothOff, which was used to create the explicit images at Westfield High School.

Francesca and Dorota decided to take action after feeling dissatisfied with the school board’s response to the incident. They began advocating for new legislation at both the state and federal levels to hold creators of non-consensual, sexually explicit deepfakes accountable.

The growing number of cases like the one at Westfield High School has highlighted the gaps in existing laws and the urgent need for stronger protections, especially for minors.

The National Center for Missing & Exploited Children (NCMEC) is collaborating with the Mani family to investigate the further spread of the images generated at the school.

While the school district initiated an investigation and offered counseling to affected students, the lack of criminal repercussions for the perpetrators due to current laws is a major concern for the victims’ families.

ClothOff denied involvement in the incident and suggested that a competing app may have been responsible.

Francesca and Dorota’s efforts have led to the introduction of bills in Congress to criminalize the sharing of AI-generated images without consent and provide victims with legal recourse.

Despite bipartisan support for these bills, progress has been slow due to other pressing issues in government, but efforts to address the misuse of AI technology continue at both the state and federal levels.

A bipartisan push to create deterrents against the creation and dissemination of deepfakes is gaining momentum as more states consider legislation to address the issue.

Incidents similar to the one at Westfield High School have occurred across the country, highlighting the urgent need for comprehensive laws to combat the misuse of AI technology.

Francesca and Dorota, along with other affected families, are committed to ensuring accountability for those responsible for creating and distributing deepfake images.

Their advocacy has drawn attention to the need for stronger legal protections against AI-generated deepfakes, emphasizing the importance of preventing further harm to vulnerable individuals.

Source: www.theguardian.com

AI deepfake technology advances as billions get ready to vote in a packed election year

“How awful!”

Gail Huntley picked up the phone and immediately recognized Joe Biden's raspy voice. Huntley, a 73-year-old New Hampshire resident, had planned to vote for the president in the state's upcoming primary and was perplexed when she received a prerecorded message urging her not to vote.

“It's important that you save your vote for the November election,” the message said. “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.”

Huntley quickly realized the call was fake, but thought Biden's words had been taken out of context. She was shocked when it was revealed that the recording was generated by AI. Within weeks, the United States outlawed robocalls that use AI-generated voices.

The Biden deepfake was the first major test for governments, tech companies, and civil society groups, which are embroiled in a heated debate over how best to police an information ecosystem in which anyone can create photorealistic images of candidates, or replicate their voices with terrifying accuracy.

As citizens of dozens of countries – including the US, India, and possibly the UK – go to the polls in 2024, experts say democratic processes are at serious risk of being disrupted by artificial intelligence.

AI fakes have already been used in elections in Slovakia, Taiwan, and Indonesia, and they land in an environment where trust in politicians, institutions, and the media is already low.

Watchdog groups warn that more than 40,000 people have been laid off at the tech companies that host and moderate much of this content, leaving digital media uniquely vulnerable to abuse.

Mission Impossible?

For Biden, concerns about the potentially dangerous uses of AI spiked after he watched the latest Mission: Impossible movie. During a weekend at Camp David, the president relaxed in front of the film, in which Tom Cruise's Ethan Hunt takes on a rogue AI.

After the film, White House deputy chief of staff Bruce Reed said that if Biden wasn't already concerned about what could go wrong with AI, “he saw plenty more to worry about.”

Since then, Biden has signed an executive order requiring major AI developers to share safety test results and other information with the government.

And the United States is not alone in taking action. The EU is about to pass one of the most comprehensive laws to regulate AI, but it won't come into force until 2026. Proposed regulations in the UK have been criticized for moving too slowly.

But because the United States is home to many of the most innovative technology companies, the White House's actions will have a major impact on how the most disruptive AI products are developed.

Katie Harbath, who spent a decade helping shape policy at Facebook and now works on trust and safety issues at tech companies, says the US government isn't doing enough. Concerns about stifling innovation may play a part, she says, especially as China moves to develop its own AI industry.

Harbath had a ringside seat as the information ecosystem evolved from the “golden age” of social media growth, through the great reckoning after the Brexit and Trump votes, to the subsequent efforts to stay ahead of disinformation.

Her mantra for 2024 is “panic responsibly.”

In the short term, she says, the regulators and police of AI-generated content will be the very companies developing the tools used to create it.

“I don't know if companies are ready,” Harbath said. “There are also new platforms whose first real test will be this election season.”

Last week, major tech companies took a big step, signing an agreement to voluntarily adopt “reasonable precautions” to prevent AI from being used to disrupt democratic elections around the world, and to coordinate their efforts.

Signatories include OpenAI, the creator of ChatGPT, as well as Google, Adobe, and Microsoft, all of which have launched tools that generate AI-authored content. Many companies have also updated their own rules to prohibit the use of their products in political campaigns. Enforcing these bans is another matter.

OpenAI, whose powerful Dall-E software creates photorealistic images, said its tool rejects requests to generate images of real people, including candidates.

Midjourney, whose AI image generation is considered by many to be the most powerful and accurate, says users should not use the product to “attempt to influence the outcome of a political campaign or election.”

Midjourney CEO David Holz said the company is close to banning political images altogether, including photos of the leading presidential candidates. Some changes appear to be in effect already: when the Guardian asked Midjourney to produce an image of Joe Biden and Donald Trump in a boxing ring, the request was flagged and denied for violating the company's community standards.

But when I entered the same prompt, replacing Biden and Trump with British Prime Minister Rishi Sunak and opposition leader Keir Starmer, the software produced a series of images without a problem.

This example is at the center of concerns among many policymakers about how effectively tech companies are regulating AI-generated content outside the hothouse of the U.S. presidential election.

“Multi-million euro weapon of mass manipulation”

Despite OpenAI's ban on the use of its tools in political campaigns, its products were widely used in Indonesia's elections this month to create campaign art, track social media sentiment, build interactive chatbots, and target voters, Reuters reported.

Harbath said it's an open question whether startups like OpenAI can aggressively enforce their policies outside the United States.

“Each country is a little different, with different laws and cultural norms,” she said. “When you run a US-focused company, it can be difficult to realize that things work differently in other parts of the world.”

Last year's national elections in Slovakia pitted pro-Russian candidates against those advocating stronger ties with the EU. Support for Ukraine's war effort was effectively on the ballot, and EU officials warned that the vote was at risk of interference by Russia and what they called its “multi-million euro weapon of mass manipulation.”

As the election approached and the national pre-election media blackout began, an audio recording of pro-EU candidate Michal Šimečka was posted on Facebook.

In the recording, Šimečka appears to discuss ways to rig the election by buying votes from marginalized communities. The audio was fake: the AFP news agency reported that it appeared to have been manipulated using AI.

However, because media outlets and politicians were required to stay silent under the election blackout laws, it was nearly impossible to debunk the recording before voting began.

The doctored audio appears to have fallen through a loophole in how Facebook's owner, Meta, polices AI-generated material on its platform. Its community standards prohibit posting content that has been manipulated in ways that would not be apparent to an average person, or edited to make someone appear to say something they did not say – but the policy applies only to videos.

Pro-Russian candidate Robert Fico won the election and became prime minister.

When will we know that the future is here?

Despite the dangers, there are some signs that voters are better prepared for what's to come than officials think.

“Voters are smarter than we think,” Harbath said. “They may be overwhelmed, but they understand what's going on in the information environment.”

For many experts, the main concern is not the technologies we are already contending with, but the innovations just over the horizon.

Writing in MIT's Technology Review, academics said the public debate about how AI threatens democracy is “lacking imagination.” The real danger, they say, is not what we already fear, but what we cannot yet imagine.

“What rocks are we not looking under?” Harbath asks. “New technologies emerge, new bad guys emerge. There are constant high and low tides, and we have to get used to living with them.”

Source: www.theguardian.com

Iran-affiliated hackers disrupt UAE TV streaming service by creating fake news using deepfake technology

According to Microsoft analysts, Iranian state-backed hackers disrupted a television streaming service in the United Arab Emirates and broadcast a deepfake newsreader distributing reports on the Gaza war.

Microsoft announced that a hacking operation by the Islamic Revolutionary Guards Corps disrupted streaming platforms in the UAE with an AI-generated news broadcast dubbed “For Humanity.”

The fake news anchor introduced unverified images purporting to show Palestinians wounded and killed in Israeli military operations in Gaza. According to Microsoft analysts, the hacker group, known as Cotton Sandstorm, compromised three online streaming services and published a video on the messaging platform Telegram showing its disruption of a news channel with the fake newscaster.

Dubai residents using HK1RBOXX set-top boxes received a message in December reading: “We have no choice but to hack to get this message to you.” The AI-generated anchor then introduced what it described as “graphic” images, with a caption showing the number of casualties in Gaza so far.

Microsoft also noted reports of disruptions in Canada and the United Kingdom, where channels including the BBC were affected, although the BBC was not directly hacked.

In a blog post, Microsoft said, “This is the first Iranian influence operation where AI plays a key element in messaging, and is an example of the rapid and significant expansion of the Iranian operation’s scope since its inception.”

“The confusion was also felt by viewers in the UAE, UK, and Canada.”

Breakthroughs in generative AI technology have led to an increase in deepfake content online, which has raised concerns about its potential to disrupt elections, including the US presidential election.

Experts are concerned that AI-generated material could be deployed at scale to disrupt elections this year, including the US presidential election. Iran targeted the 2020 US election with a cyber campaign that included sending threatening emails to voters while posing as members of the far-right Proud Boys group, launching a website inciting violence against FBI director Christopher Wray and others, and spreading disinformation about voting infrastructure.

Microsoft said that since the Oct. 7 Hamas attack, Iranian state-backed forces have engaged in a series of cyberattacks and attempts to manipulate public opinion online, including attacks on targets in Israel, Albania, Bahrain (a signatory to the Abraham Accords formalizing relations with Israel), and the US.

Source: www.theguardian.com

Deepfake Video Ads of Sunak on Facebook Prompt Election AI Warning

More than 100 deepfake video ads impersonating Rishi Sunak were promoted as paid advertisements on Facebook in the last month alone, according to research that warns of the risks AI poses ahead of the general election.

The ads may have reached as many as 400,000 people, despite apparently violating several of Facebook's policies. It is thought to be the first time a prime minister's image has been systematically doctored at this scale.

As much as £12,929 was spent on 143 ads originating from 23 countries, including the US, Turkey, Malaysia, and the Philippines.

One ad includes fake breaking-news footage in which BBC newsreader Sarah Campbell appears to read a false story about a scandal centering on Mr. Sunak.

The fake story claims that Elon Musk has launched an application that can “collect” stock market trades, and includes a fabricated clip of Mr. Sunak saying the government has decided to test the application.

The clip leads to a fake BBC news page promoting fraudulent investments.

The research was carried out by Fenimore Harper, the communications company founded by Marcus Beard, a former Downing Street official who led No 10's counter-conspiracy-theory work during the coronavirus crisis. He warned that the ads, which mark a step change in the quality of fakes, show that this year's election risks being swamped by high-quality AI-generated falsehoods.

“With the advent of cheap and easy-to-use voice and face cloning, little knowledge or expertise is needed to use a person's likeness for malicious purposes,” Beard said.

“Unfortunately, this problem is exacerbated by lax moderation policies for paid advertising. These ads violate several of Facebook's advertising policies, yet very few of those we found appear to have been removed.”

Meta, the company that owns Facebook, has been contacted for comment.

A UK government spokesperson said: “We work widely across government, through the Democracy Defense Task Force and dedicated government teams, to ensure we respond quickly to any threats to democratic processes.”

“Our online safety laws go further by placing new requirements on social platforms to swiftly remove illegal misinformation and disinformation – including where it is generated by AI – as soon as they become aware of it.”

A BBC spokesperson said: “In a world of increasing disinformation, we urge everyone to ensure they get their news from trusted sources. We are committed to tackling the growing threat of disinformation, and in 2023 we launched BBC Verify, investing in a highly specialized team that investigates, fact-checks, verifies video, counters disinformation, analyzes data, and explains complex stories using a range of forensic and open-source intelligence (OSINT) tools.

“We build trust with our viewers by showing them how BBC journalists know the information they report and explaining how to spot fake and deepfake content. When we become aware of fake content, we take swift action.”

Regulators are concerned that time is running out to enact sweeping changes to ensure Britain’s electoral system is ready for advances in artificial intelligence before the next general election, expected to be held in November.

The government continues to consult regulators, including the Electoral Commission, and under legislation passed in 2022 digital campaign material will be required to carry an “imprint” showing who paid for and promoted it, so that voters know who is spending money to influence them.

Source: www.theguardian.com