Exposing Degradation: The Tale of Deepfakes, the Infamous AI Porn Hub | Technology

Patrizia Schlosser’s ordeal began with an apologetic call from a colleague. “I found this. Did you know?” he said, sharing a link that led her to a site called Mr. DeepFakes. There she was horrified to discover fabricated images portraying her in degrading scenarios, labeled “Patrizia Schlosser’s slutty FUNK whore” (sic).

“They were highly explicit and humiliating,” noted Schlosser, a journalist for North German Radio (NDR) and funk. “The way the creators went about it was disturbing, and it made it easy for them to distance themselves from the reality of the fakes. It was unsettling to think about someone scouring the internet for my pictures and compiling such content.”

Despite her previous investigations into the adult film sector, this particular site was unfamiliar. “I had never come across Mr. DeepFakes before. It’s a platform dedicated to fake pornographic videos and images. I was taken aback by its size and the extensive collection of videos featuring every celebrity I knew.” Initially, Schlosser attempted to ignore the images. “I shoved it to the back of my mind as a coping mechanism,” she explained. “Yet, even knowing it was fake, it felt unsettling. It’s not you, but it is you—depicted alongside a dog and a chain. I felt violated and confused. Finally, I resolved to act. I was upset and wanted those images removed.”

With the help of NDR’s STRG_F program, Schlosser successfully got the images removed. She located the young man responsible for creating them, even visiting his home and speaking with his mother (the perpetrator himself remained out of sight). However, despite collaborating with Bellingcat, she could not identify the individual behind Mr. DeepFakes. Ross Higgins, a member of the Bellingcat team, noted, “My background is in money laundering investigations. When we scrutinized the site’s structure, we discovered it shared an internet service provider (ISP) with organized crime groups.” The infrastructure hinted at connections to the Russian mercenary group Wagner and to individuals named in the Panama Papers. Additionally, advertisements on the site featured apps owned by Chinese tech companies that provided the Chinese government with access to user data. “This seemed too advanced for a mere hobbyist site,” Higgins remarked.

And indeed, that was just the beginning of what unfolded.

The narrative of Mr. DeepFakes, recognized as the largest and most infamous non-consensual deepfake porn platform, closely tracks the broader story of AI-generated adult content. The term “deepfake” itself is believed to have originated with the site’s creator. This hub of AI pornography, viewed more than 2 billion times, featured numerous female celebrities, politicians, European royals, and even relatives of US presidents in distressing scenarios including abduction, torture, and extreme forms of sexual violence. Yet the content was merely a “shop window” for the site; the actual “engine room” was the forum. Here, anyone wishing to commission a deepfake of a known person (a girlfriend, sister, classmate, colleague and so on) could easily find a vendor willing to do so at a reasonable price. The forum also served as a “training ground,” where enthusiasts exchanged knowledge, tips, academic papers, and problem-solving techniques. One common challenge was how to create deepfakes without an extensive “dataset,” that is, of individuals with few images online, such as acquaintances.

Filmmaker and activist Sophie Compton invested considerable time monitoring deepfakes while developing her acclaimed 2023 documentary, Another Body (available on iPlayer). “In retrospect, that site significantly contributed to the proliferation of deepfakes,” she stated. “There was a point at which such platforms could have been prevented from existing. Deepfake porn is merely one facet of the pervasive issue we face today. Had it not been for that site, I doubt we would have witnessed such an explosion in similar content.”

The origins of Mr. DeepFakes trace back to 2017-18, when AI-generated adult content was first emerging on platforms like Reddit. An anonymous user known as “deepfakes,” recognized as a “pioneer” of AI porn, discussed the technology’s potential in early interviews with Vice. After Reddit banned deepfake pornography in early 2018, the nascent community reacted vigorously. Compton noted, “We have records of discussions from that period illustrating how the small deepfake community was in uproar.” This prompted the creation of Mr. DeepFakes, which initially operated under the domain dpfks.com. The administrator retained the same username, gathered moderators, and laid out rules, guidelines, and comprehensive instructions for using deepfake technology.

“It’s disheartening to reflect on this chapter and realize how straightforward it could have been for authorities to curb this phenomenon,” Compton lamented. “At first, participants expected to be shut down, expressing thoughts like, ‘They’ll come for us!’ and ‘They’ll never allow us this freedom!’” Yet, as they continued with minimal repercussions, their confidence grew. Restraint dwindled as the popularity of their work, which often involved humiliating and degrading imagery, surged. Many of the figures exploited were very young, from Emma Watson to Billie Eilish and Millie Bobby Brown, and individuals such as Greta Thunberg were also targeted.

Who stands behind the project? Mr. DeepFakes occasionally granted anonymous interviews, including one for a 2022 BBC documentary, Deepfake Porn: Could You Be Next?, in which the “web developer” behind the site, operating under the alias “deepfakes,” asserted that consent from women was unnecessary because “it’s fantasy, not reality.”

Was financial gain a driving force? Mr. DeepFakes hosted advertisements and offered paid memberships payable in cryptocurrency. One forum post from 2020 mentioned a monthly profit of between $4,000 and $7,000. “There was a commercial aspect to this,” Higgins stated, describing it as “a side venture, yet so much more.” This contributed to its infamy.

At one time, the site showcased over 6,000 images of Alexandria Ocasio-Cortez (AOC), allowing users to create deepfake pornography featuring her likeness. “The implication is that in today’s society, if you rise to prominence as a woman, you can expect your image to be exploited without your consent,” Higgins noted. “The language used about women on that platform was particularly striking,” he added. “I had to adjust the tone in the online report to avoid sounding gratuitous, but it was emblematic of raw misogyny and hatred.”

In April of this year, law enforcement began investigating the site; its communications with suspects are believed to have provided evidence.

On May 4th, Mr. DeepFakes was taken offline. The notice posted on the site blamed “data loss” following the withdrawal of a “key service provider” and stated that the operation would not be restarted. It warned that any website claiming to be the same is fake, and that while the domain would eventually lapse, its operators disclaimed responsibility for any future use of it.

Mr. DeepFakes has ended, but Compton suggests it could have ended sooner. “All the indicators were present,” she commented. In April 2024, the UK government detailed plans to criminalize the creation and distribution of deepfake sexual abuse content. In response, Mr. DeepFakes promptly restricted access for users based in the UK (the legislation was later dropped amid the 2024 election campaign). “This clearly demonstrated that Mr. DeepFakes wasn’t immune to government intervention; if it posed too much risk, they weren’t willing to continue,” Compton stated.

However, deepfake pornography has grown so widespread and normalized that it no longer relies on a singular “base camp.” “The techniques and knowledge that they were proud to share have now become so common that anyone can access them via an app at the push of a button,” Compton remarked.

For those seeking more sophisticated creations, self-proclaimed experts who once frequented forums are now marketing their services. Patrizia Schlosser has firsthand knowledge of this trend. “In my investigative work, I went undercover and reached out to several forum members, requesting deepfakes of their ex-girlfriends,” Schlosser recounted. “Many people claim this phenomenon is exclusive to celebrities, but that’s not accurate. The responses were always along the lines of ‘sure…’

“Following the shutdown of Mr. DeepFakes, I received an automated response from one of them saying something akin to: ‘If you want anything created, don’t hesitate to reach out… Mr. DeepFakes may be gone, but we’re still here providing services.’

In the UK and Ireland, contact the Samaritans at freephone 116 123 or via email at jo@samaritans.org or jo@samaritans.ie. In the US, dial or text 988 Lifeline at 988 or chat via 988lifeline.org. Australian crisis support can be sought at Lifeline at 13 11 14. Find additional international helplines at: befrienders.org

In the UK, Rape Crisis offers support for rape and sexual abuse on 0808 802 9999 in England and Wales, 0808 801 0302 in Scotland, and 0800 0246 991 in Northern Ireland. In the United States, support is available through RAINN at 800-656-4673. In Australia, support can be found at 1800Respect (1800 737 732). Further international helplines are listed at ibiblio.org/rcip/internl.html


Illustration: Guardian Design/Rich Cousins



Source: www.theguardian.com

Survey Reveals 1 in 4 People Unconcerned About Non-Consensual Sexual Deepfakes

A study commissioned by law enforcement revealed that 25% of individuals either believe there is no issue with creating and sharing sexual deepfakes or feel indifferent, regardless of the subject’s consent.

In response to these findings, a senior official in law enforcement cautioned that AI is exacerbating the crisis of violence against women and girls (VAWG), with tech companies being complicit in this misconduct.

A survey involving 1,700 participants, commissioned by the office of the police chief scientific adviser, found that 13% were comfortable with creating and sharing sexual or intimate deepfakes (content manipulated using AI without consent).

Additionally, 12% of respondents felt neutral about the moral and legal acceptability of creating and sharing such deepfakes.

Det Ch Supt Claire Hammond, of the national centre for violence against women and girls (VAWG) and public protection, emphasized that “distributing intimate images of someone without their consent, regardless of whether they are authentic, is a serious crime.”

Discussing the survey results, she remarked: “The rise of AI technology is accelerating violence against women and girls globally. Tech companies bear responsibility for enabling this abuse, facilitating the creation and dissemination of harmful material with ease. Immediate action is required.”

She encouraged anyone affected by deepfakes to report them to authorities. Ms. Hammond stated: “This is a serious crime, and we are here to support you. Nobody should endure pain or shame in silence.”

Under new data laws, the creation of sexually explicit deepfakes without consent will be classified as a criminal offense.

A report from crime and justice consultancy Crest Advisory indicated that 7% of participants had been depicted in a sexual or intimate deepfake. Of those, only 51% reported the incident to law enforcement. Among those who stayed silent, common reasons included embarrassment and doubt that the crime would be taken seriously.

The data also pointed out that men under 45 were more likely to be involved in the creation and sharing of deepfakes. This demographic also tended to consume pornographic content, hold misogynistic views, and have a favorable attitude toward AI. However, the report noted that the correlation between age, gender, and such beliefs is weak, calling for more research to delve deeper into this connection.

One in 20 respondents admitted to having created a deepfake, while over 10% expressed willingness to do so in the future. Moreover, two-thirds said they had seen, or might have seen, a deepfake.

Karian Desroches, the report’s author and head of policy and strategy at Crest Advisory, cautioned that the creation of deepfakes is “growing increasingly common as technology becomes more affordable and accessible.”

“While some deepfake content might seem innocuous, the majority is of a sexual nature and predominantly directed at women.”

“We are profoundly alarmed by our findings: a demographic of young individuals who actively consume pornography, exhibit misogynistic attitudes, and perceive no harm in creating or sharing sexual deepfakes of others without consent.”

“We are living in troubling times, and without immediate and concerted action in the digital arena, we jeopardize the futures of our daughters (and sons),” said Carrie Jane Beach, an advocate for stronger protections for deepfake abuse victims.

Moreover, she stated: “We are witnessing a generation of children growing up devoid of protections, laws, or regulations addressing this matter, leading to dire consequences of such unregulated freedom.

“Confronting this issue starts at home. To have any hope of elimination, we must prioritize education and foster open discussions every day.”

Source: www.theguardian.com

Chilling Effect: How Fear of ‘Naked’ Apps and AI Deepfakes is Driving Indian Women Away from the Internet

Gaatha Sarvaiya enjoys sharing her artistic endeavors on social media. As a law graduate from India in her early 20s, she is at the outset of her professional journey, striving to attract public interest. However, the emergence of AI-driven deepfakes poses a significant threat, making it uncertain whether the images she shares will be transformed into something inappropriate or unsettling.

“I immediately considered, ‘Okay, maybe this isn’t safe. People could take our pictures and manipulate them,'” Sarvaiya, who resides in Mumbai, expresses.

“There is certainly a chilling effect,” notes Rohini Lakshane, a gender rights and digital policy researcher based in Mysore. She too refrains from posting photos of herself online. “Given how easily it can be exploited, I remain particularly cautious.”

In recent years, India has emerged as a crucial testing ground for AI technologies, becoming the second-largest market for OpenAI with the technology being widely embraced across various professions.

However, a report released recently reveals that the growing usage of AI is generating formidable new avenues for harassment directed at women, according to data compiled by the Rati Foundation, which operates a national helpline for online abuse victims.

“Over the past three years, we’ve identified that a significant majority of AI-generated content is used to target women and sexual minorities,” asserts the report, prepared by Tattle, an organization focused on curbing misinformation on social media in India.

The report highlights the increasing use of AI tools to digitally alter images and videos of women, including to create nudes and culturally sensitive content such as depictions of public affection, which may be accepted in Western cultures but is frowned upon in many Indian communities.




Indian singer Asha Bhosle (left) and journalist Rana Ayyub are victims of deepfake manipulations on social media. Photo: Getty

The findings indicated that approximately 10% of the numerous cases documented by the helpline involve such altered images. “AI significantly simplifies the creation of realistic-looking content,” the report notes.

There have been prominent cases of Indian women’s likenesses being manipulated by AI tools. Bollywood singer Asha Bhosle’s image and voice were replicated using AI and distributed on YouTube. Journalist Rana Ayyub faced a campaign last year that targeted her personal information, with deepfake sexual images of her appearing on social media.

These instances sparked widespread societal discussions, with some public figures like Bhosle asserting that they have successfully claimed legal rights concerning their voice and image. However, the broader implications for everyday women like Sarvaiya, who increasingly fear engaging online, are less frequently discussed.

“When individuals encounter online harassment, they often self-censor or become less active online as a direct consequence,” explains Tarunima Prabhakar, co-founder of Tattle. Her organization conducted focus group research for two years across India to gauge the societal impacts of digital abuse.

“The predominant emotion we identified is one of fatigue,” she remarks. “This fatigue often leads them to withdraw entirely from online platforms.”

In recent years, Sarvaiya and her peers have monitored high-profile deepfake abuse cases, including those of Ayyub and Bollywood actress Rashmika Mandanna. “It’s a bit frightening for women here,” she admits.

Currently, Sarvaiya is reluctant to share anything on social media and has opted to keep her Instagram account private. She fears this measure may not suffice to safeguard her. Women are sometimes captured in public places, such as subways, with their photos potentially surfacing online later.

“It’s not as common as people might think, but you can be unlucky,” she observes. “A friend of a friend is actually facing threats online.”

Lakshane mentions that she often requests not to be photographed at events where she speaks. Despite her precautions, she is mentally preparing for the possibility that a deepfake image or video of her could emerge. In the app, her profile image is an illustration of herself, rather than a photo.

“Women with a public platform, an online presence, and those who express political opinions face a significant risk of image misuse,” she highlights.


Rati’s report details how AI “nudify” applications, designed to remove clothing from images, have normalized behaviors once seen as extreme. In one reported case, a woman approached the helpline after a photo she had originally submitted for a loan application was misused for extortion.

“When she declined to continue payments, her uploaded photo was digitally altered with the nudify app and superimposed onto a pornographic image,” the report details.

This altered image, accompanied by her phone number, was circulated on WhatsApp, resulting in a flood of sexually explicit calls and messages from strangers. The woman expressed to the helpline that she felt “humiliated and socially stigmatized, as though I had ‘become involved in something sordid’.”




A fake video allegedly featuring Indian National Congress leader Rahul Gandhi and Finance Minister Nirmala Sitharaman promoting a financial scheme. Photo: DAU Secretariat

In India, as in many parts of the world, deepfakes exist in a legal gray area. Although no statute prohibits them outright, Rati’s report highlights existing Indian laws on online harassment and intimidation under which women can also report AI deepfakes.

“However, the process is often lengthy,” Sarvaiya shares, emphasizing that India’s legal framework is not adequately prepared to address issues surrounding AI deepfakes. “There is a significant amount of bureaucracy involved in seeking justice for what has occurred.”

A significant part of the problem lies with the platforms through which such images are disseminated, including YouTube, Meta, X, Instagram, and WhatsApp. Indian law enforcement agencies describe the process of compelling these companies to eliminate abusive content as “often opaque, resource-draining, inconsistent, and ineffective,” according to a report published by Equality Now, an organization advocating for women’s rights.

Meanwhile, Rati’s report documents multiple instances in which platforms, including those run by Apple and Meta, responded inadequately to online abuse, exacerbating the spread of nudify apps.

Although WhatsApp did respond in the extortion scenario, the action was deemed “insufficient” since the altered images had already proliferated across the internet, Rati indicated. In another instance, an Instagram creator in India was targeted by a troll who shared nude clips, yet Instagram only reacted after “persistent efforts” and with a “delayed and inadequate” response.


The report indicates that victims reporting harassment on these platforms often go unheard, prompting them to reach out to helplines. Furthermore, even when accounts disseminating abusive material are removed, such content tends to resurface, a phenomenon Rati describes as “content recidivism.”

“One persistent characteristic of AI abuse is its tendency to proliferate: it is easily produced, broadly shared, and repeated multiple times,” Rati states. Confronting this issue “will necessitate much greater transparency and data accessibility from the platforms themselves.”

Source: www.theguardian.com

Bryan Cranston Appreciates OpenAI’s Efforts to Combat Sora 2 Deepfakes

Bryan Cranston expressed his “gratitude” to OpenAI for addressing deepfakes of him on its generative AI video platform Sora 2. This action follows instances where users managed to create his voice and likeness without his permission.

The Breaking Bad actor voiced concerns to the actors’ union SAG-AFTRA after Sora 2 users generated his likeness during the platform’s recent launch. On October 11th, the LA Times reported that in one instance, “a synthetic Michael Jackson takes a selfie video using an image of Breaking Bad star Bryan Cranston.”


To appear in Sora 2, living individuals must give explicit consent by opting in. In statements following the release, OpenAI confirmed it has implemented “measures to block depictions of public figures” and established “guardrails to ensure audio and visual likenesses are used with consent.”

However, upon Sora 2’s launch, several reports emerged, including from the Wall Street Journal, Hollywood Reporter, and LA Times, that OpenAI had told several talent agencies that if they didn’t want their clients’ likenesses or copyrighted material to be featured in Sora 2, they needed to opt out rather than opt in, causing an uproar in Hollywood.

OpenAI contests these claims and told the LA Times its goal has always been to allow public figures to control how their likenesses are utilized.

On Monday, Cranston released a statement via SAG-AFTRA thanking OpenAI for “enhancing guardrails” to prevent users from generating unauthorized depictions of him.

“I was very concerned, not only for myself but for all performers whose work and identities could be misappropriated,” Cranston commented. “We are grateful for OpenAI’s enhanced policies and guardrails and hope that OpenAI and all companies involved in this endeavor will respect our personal and professional rights to control the reproduction of our voices and likenesses.”

Hollywood’s top two agencies, Creative Artists Agency (CAA) and United Talent Agency (UTA), which represents Cranston, have repeatedly highlighted the potential dangers Sora 2 and similar generative AI platforms pose to clients and their careers.

Nevertheless, on Monday, UTA and CAA released a joint statement alongside OpenAI, SAG-AFTRA, and the Association of Talent Agents, declaring that what happened with Cranston was inappropriate and that they would collaborate to ensure the actor’s “right to determine how and whether he can be simulated.”


“While OpenAI has maintained from the start that consent is required for the use of voice and likeness, the company has expressed regret over these unintended generations. OpenAI has reinforced its guardrails concerning the replication of voice and likeness without opt-in,” according to the statement.

Actor Sean Astin, the new president of SAG-AFTRA, cautioned that Cranston is “one of many performers whose voices and likenesses are at risk of mass appropriation through reproduction technology.”

“Bryan did the right thing by contacting his union and professional representatives to address this issue. We now have a favorable outcome in this case. We are pleased that OpenAI is committed to implementing an opt-in protocol, which enables all artists to decide whether they wish to participate in the exploitation of their voice and likeness using AI,” Astin remarked.

“To put it simply, opt-in protocols are the only ethical approach, and the NO FAKES Act enhances our safety,” he continued. The NO FAKES Act is under consideration in Congress and aims to prohibit the production and distribution of AI-generated replicas of any individual without their consent.

OpenAI has openly supported the NO FAKES Act, with CEO Sam Altman stating the company is “firmly dedicated to shielding performers from the misuse of their voices and likenesses.”

Sora 2 permits users to generate “historical figures,” broadly defined as well-known individuals who are deceased. However, OpenAI has recently acknowledged that representatives of “recently deceased” celebrities can request that their likeness be blocked from Sora 2.

Earlier in the month, OpenAI announced, in partnership with the estate of Martin Luther King Jr, that it had paused the ability to depict King in Sora 2 at the estate’s request as it “strengthened guardrails around historical figures.”

Recently, Zelda Williams, the daughter of the late actor Robin Williams, pleaded with people to “stop” sending her AI videos of her father, while Kelly Carlin, the daughter of the late comedian George Carlin, characterized her father’s AI videos as “overwhelming and depressing.”

Legal experts suggest that generative AI platforms may be using depictions of deceased historical figures to test the boundaries of what is legally permissible.

Source: www.theguardian.com

Denmark Addresses Deepfakes by Granting Individuals Copyright Over Their Own Likeness and Features

The Danish government is taking action to curb the creation and distribution of AI-generated deepfakes by revising copyright laws, ensuring that individuals hold rights over their own bodies, facial features, and voices.

On Thursday, Danish officials announced they would strengthen protections against digital imitation of personal identities, marking what they believe to be the first such law in Europe.

With support from a broad coalition across political parties, the Ministry of Culture is set to propose amendments to the existing law for consultation before the summer break, with the intention of submitting the changes in the fall.

Deepfake technology is described as an exceedingly realistic digital representation of an individual, including their appearance and voice.

Danish Minister of Culture Jakob Engel-Schmidt expressed his hope that the bill he will submit to parliament conveys a “clear message”.

He stated to the Guardian: “We collectively send a clear message that everyone has the right to their body, their voice, and their facial features.”

He continued: “People can be exploited through digital duplication for all sorts of malicious purposes. I will not accept that.”

The initiative reportedly enjoys support from nine in 10 MPs. It comes amid rapid advancements in AI technology that have made it simpler than ever to create convincing fake images, videos, or audio mimicking others.

If passed, the changes to Danish copyright law would allow citizens to request the removal of content from online platforms that is shared without their consent.

Additionally, the law would regulate “realistic and digitally generated imitations” of artistic performances without consent, with violations potentially leading to compensation for affected individuals.

The government has clarified that the new regulations will not interfere with parody and satire, which will still be allowed.


“Of course this is new ground we are breaking, and we are prepared to take further steps if the platforms do not comply,” Engel-Schmidt remarked.

He hopes other European nations will follow Denmark’s example and plans to use Denmark’s upcoming EU presidency to share the initiative with his European counterparts.

Should tech platforms fail to comply with the new law, they may face “significant fines,” which could escalate to a matter for the European Commission. “This is why I believe high-tech platforms will take this very seriously,” he added.

Source: www.theguardian.com

Deepfakes Are Harder to Spot: Now They Even Have a Heartbeat

Deepfake technology—a method for digitally altering a person’s face or body to impersonate someone else—is advancing at an alarming rate.

That is the conclusion of a recent study published in the journal Frontiers in Imaging, which found that some of the most advanced deepfake detectors, those that look for a consistent pattern of blood flow across the face, are no longer reliable, complicating the search for harmful content.

Deepfakes are typically generated from “driving videos”: real footage that artificial intelligence modifies to replace a person’s appearance in the video.

Not all applications of this technology are harmful; for instance, smartphone apps can age your face or transform you into a cartoon character, showcasing the same underlying techniques for innocent fun.

However, at their most malicious, deepfakes can be used to create non-consensual explicit content, disseminate false information, and unjustly implicate innocent individuals.

Experts caution that deepfakes of figures like Donald Trump could spread misinformation, undermining public opinion and trust in genuine media – Photo credit: Getty

In this study, researchers utilized cutting-edge deepfake detectors based on medical imaging methods.

Remote photoplethysmography (rPPG) measures heartbeats by detecting minute variations in blood flow beneath the skin, similar to the pulse oximeters used in healthcare settings.

The accuracy of the detector is remarkable, with only a 2-3 beats per minute variance when compared to electrocardiogram (ECG) records.
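As a rough illustration of the principle (not the detector used in the study), the sketch below estimates a pulse rate from a stack of cropped face frames by averaging the green channel over time and picking the dominant frequency in the plausible heart-rate band; the function name estimate_bpm and the toy input are placeholders.

```python
# Minimal rPPG-style pulse estimate from a stack of face-crop frames.
# Illustrative only: real detectors use face tracking, skin masks and
# far more robust signal processing than this sketch.
import numpy as np

def estimate_bpm(face_frames, fps=30.0, low_hz=0.7, high_hz=4.0):
    """face_frames: array of shape (T, H, W, 3), RGB face crops over time."""
    frames = np.asarray(face_frames, dtype=np.float64)
    # 1. Spatially average the green channel, which carries the strongest
    #    blood-volume signal, to get a 1-D trace over time.
    trace = frames[..., 1].mean(axis=(1, 2))
    trace = trace - trace.mean()
    # 2. Look at the frequency spectrum and keep only plausible heart rates
    #    (0.7-4.0 Hz, roughly 42-240 beats per minute).
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    # 3. The dominant frequency in that band is the pulse estimate.
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0  # beats per minute

# Toy check: 10 seconds of synthetic frames whose brightness pulses at 1.2 Hz.
t = np.arange(300) / 30.0
fake = (128 + 2 * np.sin(2 * np.pi * 1.2 * t))[:, None, None, None] * np.ones((1, 8, 8, 3))
print(round(estimate_bpm(fake)))  # ~72 bpm
```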

It was previously believed that deepfakes couldn’t replicate these subtle signals accurately enough to fool rPPG-based detectors, but that assumption has proven incorrect.

“If the driving video features a real person, this information can now be transferred to deepfake videos,” stated Professor Peter Eisert, a co-author of the research, in an interview with BBC Science Focus. “I think that’s the trajectory of all deepfake detectors. As deepfakes evolve, detectors that were once effective may soon become ineffective.”

During testing, the team found that the latest deepfake videos often displayed a remarkably realistic heartbeat, even when one had not been deliberately added.

Future deepfakes may convincingly depict actions or statements that individuals never made, potentially leading a large segment of the public to believe them unquestioningly – Source: Getty

Does this mean we are doomed to never trust online videos again? Not necessarily.

The Eisert team is optimistic that their new detection approach will prove effective. Rather than simply measuring overall pulse rates, future detectors may track detailed blood flow dynamics across the face.

“As the heart beats, blood circulates through the vessels and into the face,” Eisert explained. “This flow is then distributed throughout the facial region, and the movement has a slight time delay that can be detected in genuine footage.”

Ultimately, however, Eisert is skeptical that the battle can be won with deepfake detection alone. Instead, he advocates the use of “digital fingerprints” (cryptographic evidence that video content has not been tampered with) as a more sustainable solution.

“I fear there will come a time when deepfakes are incredibly difficult to detect,” Eisert remarked. “I personally believe that focusing on technologies that verify the authenticity of footage is more vital than just distinguishing between genuine and fake content.”

About our experts

Peter Eisert is head of the Vision & Imaging Technologies Department and chair of visual computing at Humboldt University in Germany. A professor of visual computing, he has published more than 200 papers in conferences and journals, serves as an associate editor of the Journal on Image and Video Processing, and sits on the editorial committee of the Journal of Visual Communication and Image Representation.


Source: www.sciencefocus.com

Exploring the Dark World of Sexual Deepfakes: Women Fighting Back against Fake Representations

It started with an anonymous email. "This is true. I'm sorry to have to contact you," it read. Below the message were three links to internet forums. "HUGE trigger warning…they contain vile photoshopped images of you."

Jodie (not her real name) froze. The 27-year-old from Cambridgeshire had had problems in the past with her photos being stolen to set up fake dating profiles and social media accounts. She had called the police, but was told there was nothing they could do, and pushed it to the back of her mind.

But she couldn't ignore this email, which arrived on March 10, 2021. She clicked on the links. "It was like time stood still," she said. "I remember screaming so loud. I just completely broke down."

The forum, on an alternative porn website, contained hundreds of photos of her: alone, on holiday, and with friends and housemates, alongside a caption labeling her a "slut". Commenters called her a "slut" and a "prostitute", asked other users to rate her, and described what they would like to do to her.

The person who posted the photos also shared an invitation with other members of the forum: to use artificial intelligence to create sexually explicit "deepfakes" (digitally altered content) from fully clothed photos of Jodie taken from her private Instagram.

"I've never done anything like this before but I love seeing her being faked…happy to chat and show more of her too…:D," they wrote. In response, users posted hundreds of composite images and videos combining other women's bodies with Jodie's face. One showed her dressed in a schoolgirl's uniform being raped by a teacher in a classroom. Others showed her fully "nude". "It was me, having sex in every room," she said. "The shock and devastation still haunts me."

The now-deleted fakes of Jodie are part of a growing wave of synthetic, sexually explicit photos and videos being created, traded and sold across social media apps, private messages and gaming platforms in the UK and around the world, as well as on adult forums and porn sites.




Inside the helpline office. Photo: Jim Wileman/Observer

Last week, the government announced a "crackdown" on explicit deepfakes, promising to expand current laws, under which sharing such images has been a criminal offence since January 2024, so that creating them without consent is also illegal. Asking someone else to make them for you is not expected to be covered. Nor has the government confirmed whether the new offence will be based on consent (campaigners say it must be) or whether victims will have to prove that the perpetrator acted with malicious intent.

At the Revenge Porn Helpline's headquarters in a business park on the outskirts of Exeter, senior practitioner Kate Worthington, 28, says stronger laws with no loopholes are desperately needed.

Launched in 2015, the helpline is a dedicated service for victims of intimate image abuse, part-funded by the Home Office. Deepfake incidents are at an all-time high, with reports of synthetic image abuse up 400% since 2017. The numbers nevertheless remain small compared with intimate image abuse overall: there were 50 cases last year, about 1% of the total caseload. The main reason, Worthington says, is that it is vastly underreported. "Victims often don't know their images are being shared."

Many perpetrators of deepfake image abuse appear to be motivated by "collector culture". "A lot of the time it's not with the intention of the person knowing," Worthington said. "It's bought, sold, exchanged and traded for sexual gratification or for status. If you're finding this content and sharing it alongside someone's Snap handle, Insta handle or LinkedIn profile, you may receive glory." Many are created using "nudify" apps. In March, the charity that runs the Revenge Porn Helpline reported 29 such services to Apple, which removed them.

There have also been cases where composite images have been used to directly threaten or humiliate people. The helpline has heard of boys creating fake incestuous images of female relatives; of a man addicted to porn making composite photos of his partner appearing to engage in non-consensual sex; of people photographed at the gym whose images were turned into deepfake videos that made it look as if they were having sex. Most, but not all, of those targeted are women: approximately 72% of the deepfake cases identified by the helpline involved women. The oldest was in her 70s.

There have also been cases where Muslim women have been targeted with deepfake images of themselves wearing revealing clothing or without their hijabs.

Regardless of intent, the impact is often extreme. “Many of these photos are so realistic that your coworkers, neighbors, and grandma won't be able to tell the difference,” says Worthington.




Kate Worthington, Senior Helpline Practitioner. Photo: Jim Wileman/Observer

The Revenge Porn Helpline helps people remove abusive images. Amanda Dashwood, 30, who has worked at the helpline for two years, says this is usually a caller's priority. “It says, 'Oh my God, help me. I need to delete this before people see it,'” she says.

She and her colleagues on the helpline team, eight women, most under 30, have a variety of tools at their disposal. If the victim knows where the content was posted, the team will issue a takedown request directly to the platform. Some platforms ignore the request completely. However, the helpline has partnered with most of the major platforms, from Instagram and Snapchat to Pornhub and OnlyFans, and has a successful removal rate of 90%.

If the victim doesn't know where the content was posted, or suspects it is being shared more widely, they can (with their consent) send a selfie to be run through facial recognition and reverse image search tools. Although these tools are not foolproof, they can detect material being shared on the open web.

The team can also advise on steps to stop content being posted online again. They often direct people to a service called StopNCII. The tool was created by the online safety charity SWGfL, which also runs the Revenge Porn Helpline, with funding from Meta.

Users can upload real or synthetic photos, and the technology creates a unique hash of each image and shares it with partner platforms such as Facebook, Instagram, TikTok, Snapchat, Pornhub, and Reddit (but not X or Discord). If someone tries to upload a matching image, it is automatically blocked. As of December, 1 million images had been hashed and 24,000 uploads had been proactively blocked.
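StopNCII's exact hashing scheme and partner APIs are not described here, but the general idea, hashing on the user's device and matching at upload time, can be sketched with a simple perceptual "difference hash"; everything below (function names, file paths, the blocklist) is illustrative rather than the real service's implementation.

```python
# Sketch of hash-and-match image blocking in the spirit of StopNCII.
# The real service uses its own perceptual-hashing scheme and private APIs;
# this toy "difference hash" only illustrates the idea: the image itself
# never leaves the user's device, only a short fingerprint does.
from PIL import Image

def dhash(path, hash_size=8):
    """Return a perceptual fingerprint by comparing neighbouring pixel brightness."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    px = list(img.getdata())
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits.append(1 if left > right else 0)
    return int("".join(map(str, bits)), 2)

def is_blocked(upload_hash, blocklist, max_distance=5):
    """Platform-side check: block uploads whose hash is close to a reported one."""
    return any(bin(upload_hash ^ h).count("1") <= max_distance for h in blocklist)

# Usage sketch (paths are placeholders):
# blocklist = {dhash("reported_image.jpg")}            # hashed on the victim's device
# if is_blocked(dhash("incoming_upload.jpg"), blocklist):
#     ...  # reject the upload
```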




Alex Woolf was found guilty over the derogatory posting of the images, not for soliciting them. Photo: Handout

Some victims go to the police, but responses vary widely depending on the force involved. Victims who try to report the abusive use of composite images have been told that police cannot help with edited images or that prosecution is not in the public interest.

Helpline manager Sophie Mortimer recalls another incident in which police said: "No, that's not you, it's someone who looks like you," and refused to investigate. "I feel like police sometimes look for reasons not to pursue these types of cases," Mortimer said. "We know it's difficult, but that doesn't negate the real harm that's being caused to people."

In November, Sam Miller, assistant chief constable and director of the violence against women and girls strategy at the National Police Chiefs' Council, told a parliamentary inquiry into intimate image abuse that she was worried police lacked a deep understanding of this behaviour, and about discrepancies in laws and precedents. "Yesterday, one victim told me that out of the 450 victims of deepfake images she has spoken to, only two have had a positive experience with law enforcement," she said.

For Jodie, it is clear that awareness of deepfake abuse needs to be raised, not only among law enforcement but also among the general public.

After being alerted to her deepfake, she spent hours scrolling through posts trying to piece together what happened.

She noticed that they had not been shared by a stranger but by a close friend: Alex Woolf, a Cambridge University graduate and former BBC Young Composer of the Year. He had posted a photo of her from which he had been cropped out. "I knew I hadn't posted that photo on Instagram and had only sent it to him. That's when the penny dropped."


Source: www.theguardian.com

Astronomy techniques employed by scientists to uncover deepfakes

According to a team of astronomers from the University of Hull, spotting a deepfake is as simple as looking for stars in the eyes. They propose that AI-generated fakes can be identified by examining human eyes in a similar manner to studying photos of galaxies. This means that if the reflections in a person’s eye match, then the image is likely of a real human. If not, it is likely a deepfake.



In this image, the person on the left (Scarlett Johansson) is real and the one on the right is generated by AI. Below their faces are enlarged views of their eyeballs. The reflections in the eyeballs match for the real person but are inaccurate (from a physical standpoint) for the fake one. Image credit: Adejumoke Owolabi / CC BY 4.0.

“The eye reflections match up for real people but are incorrect (from a physics standpoint) for fake people,” said Prof Kevin Pimbblet, of the University of Hull.

Professor Pimbblet and his colleagues analysed the light reflections in the eyes of people in real and AI-generated images.

They then quantified the reflections using a method commonly used in astronomy to check for consistency between the reflections in the left and right eyes.

In fake images, the reflections in both eyes are often inconsistent, while in real images the reflections in both eyes are usually the same.

“To measure the shape of a galaxy we analyse whether it has a compact centre, whether it has symmetry and how smooth it is – we analyse the distribution of light,” Professor Pimbblet said.

“We automatically detect the reflections and run their morphological features through the CAS (concentration, asymmetry, smoothness) system and the Gini coefficient to compare the similarity between the left and right eyeballs.”

“Our findings suggest that in deepfakes there are differences between the two.”

The Gini coefficient is typically used to measure how light in an image of a galaxy is distributed from pixel to pixel.

This measurement is done by ordering the pixels that make up an image of a galaxy in order of increasing flux, and comparing the result with what would be expected from a perfectly uniform flux distribution.

A Gini value of 0 is a galaxy whose light is evenly distributed across all pixels in the image, and a Gini value of 1 is a galaxy whose light is all concentrated in one pixel.
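The description above maps directly onto a few lines of code. The sketch below, a simplification rather than the authors' actual pipeline, computes a Gini coefficient over the pixel fluxes of an image patch and suggests how the values for the left- and right-eye reflections might be compared; the patch names and threshold are illustrative.

```python
# Gini coefficient of pixel fluxes, as described above: 0 means light is
# spread evenly across all pixels, 1 means it is concentrated in one pixel.
# Comparing the value for the left- and right-eye reflections is a simple
# way to flag an inconsistency; this is a sketch, not the study's pipeline.
import numpy as np

def gini(pixels):
    flux = np.sort(np.asarray(pixels, dtype=np.float64).ravel())  # ascending flux
    n = flux.size
    total = flux.sum()
    if total == 0:
        return 0.0
    # Standard discrete formulation: weight each pixel by its rank in the
    # sorted list and compare against a perfectly uniform distribution.
    ranks = np.arange(1, n + 1)
    return float((2 * ranks - n - 1).dot(flux) / (n * total))

# Toy example: an evenly lit patch vs. one with all light in a single pixel.
even = np.ones((8, 8))
point = np.zeros((8, 8)); point[4, 4] = 1.0
print(round(gini(even), 2), round(gini(point), 2))  # 0.0 and ~0.98

# A large gap between the two eyes would mark the image as suspect, e.g.:
# suspicious = abs(gini(left_eye_patch) - gini(right_eye_patch)) > threshold
```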

The astronomers also tested CAS parameters, a tool originally developed to measure the distribution of a galaxy’s light and determine its morphology, but found they were not a useful predictor of fake eyes.

“It’s important to note that this is not a silver bullet for detecting fake images,” Professor Pimbblet said.

“There are false positives and false negatives, and it doesn’t detect everything.”

“But this method provides a foundation, a plan of attack, in the arms race to detect deepfakes.”

The researchers presented their work on July 15 at the Royal Astronomical Society National Astronomy Meeting 2024 (NAM 2024) at the University of Hull.

_____

Kevin Pimbblet et al. Detecting deepfakes using astronomy techniques. NAM 2024.

Source: www.sci.news

Producing sexually explicit deepfake images is a crime in the UK | Deepfakes

The Ministry of Justice has declared that the creation of sexually explicit “deepfake” images will soon be considered a criminal offense under new legislation.

Those found guilty of producing such images without consent could face a criminal record, an unlimited fine, and possible imprisonment if these images are distributed widely.

The ministry stipulates that creating a deepfake image will be punishable regardless of whether the creator intended to share it. Last year’s online safety laws already criminalize the dissemination of intimate deepfakes, which advances in artificial intelligence have made easier to produce.

The offense is anticipated to be added to the Criminal Justice Bill currently under parliamentary review. Minister Laura Farris affirmed that the creation of deepfake sexual content is unacceptable under any circumstances.

“This reprehensible act of degrading and dehumanizing individuals, particularly women, will not be tolerated. The potential repercussions of widespread sharing of such material can be devastating. This government is unwavering in its stance against it.”

Yvette Cooper, the Shadow Home Secretary, voiced support for the new law, stating: “It is imperative to criminalize the production of deepfake pornography. Imposing someone’s image onto explicit content violates their autonomy and privacy, posing significant harm and must be condemned.

Law enforcement must be equipped with the necessary training and resources to enforce these laws rigorously and dissuade offenders from acting with impunity,” added Cooper.

Deborah Joseph, editor-in-chief of Glamour UK, welcomed the proposed amendments, citing a survey revealing that 91% of readers perceive deepfake technology as a threat to women’s safety. Personal accounts from victims emphasized the severe impact of this activity.

“While this marks a crucial initial step, there remains a considerable journey ahead for ensuring women feel completely safeguarded from this atrocious practice,” asserted Joseph.

Source: www.theguardian.com

Family brings battle against deepfake nude images to Washington | Deepfakes

Francesca Mani returned home from school in suburban New Jersey last October and shared shocking news with her mother, Dorota.

At Westfield High School, the 14-year-old and her friends had been targeted with abuse through the distribution of fake nude images created using artificial intelligence.

Dorota, aware of the power of this technology, was surprised by how easily the images were generated.

She expressed her disbelief, stating, “With just a single image, I didn’t anticipate how quickly this could happen. It’s a risk for anyone at the simple click of a button.”

An investigation by The Guardian’s Black Box podcast series revealed the origins and operators of an app called ClothOff, which was used to create the explicit images at Westfield High School.

Francesca and Dorota decided to take action after feeling dissatisfied with the school board’s response to the incident. They began advocating for new legislation at both the state and federal levels to hold creators of non-consensual, sexually explicit deepfakes accountable.

The growing number of cases like the one at Westfield High School has highlighted the gaps in existing laws and the urgent need for stronger protections, especially for minors.

The National Center for Missing & Exploited Children (NCMEC) is collaborating with the Mani family to investigate the further spread of the images generated at the school.

While the school district initiated an investigation and offered counseling to affected students, the lack of criminal repercussions for the perpetrators due to current laws is a major concern for the victims’ families.

ClothOff denied involvement in the incident and suggested that a competing app may have been responsible.

Francesca and Dorota’s efforts have led to the introduction of bills in Congress to criminalize the sharing of AI-generated images without consent and provide victims with legal recourse.

Despite bipartisan support for these bills, progress has been slow due to other pressing issues in government, but efforts to address the misuse of AI technology continue at both the state and federal levels.

A bipartisan push to create deterrents against the creation and dissemination of deepfakes is gaining momentum as more states consider legislation to address the issue.

Incidents similar to the one at Westfield High School have occurred across the country, highlighting the urgent need for comprehensive laws to combat the misuse of AI technology.

Francesca and Dorota, along with other affected families, are committed to ensuring accountability for those responsible for creating and distributing deepfake images.

Their advocacy has drawn attention to the need for stronger legal protections against AI-generated deepfakes, emphasizing the importance of preventing further harm to vulnerable individuals.

Source: www.theguardian.com

James Cleverly warns that Britain’s enemies could utilize AI deepfakes to manipulate election results

The Home Secretary expressed concerns about criminals and “malicious actors” using AI-generated “deepfakes” to disrupt the general election.

James Cleverly, ahead of a meeting with social media leaders, highlighted the potential threats posed by rapid technological advancements to elections globally.

He cited examples of individuals working on behalf of countries like Russia and Iran creating numerous deepfakes (realistic fabricated images and videos) to influence democratic processes, including in the UK.

He emphasized the escalating use of deepfakes and AI-generated content to deceive and bewilder, stating that “the era of deepfakes has already begun.”

Concerned about the impact on democracy, he stressed the importance of implementing regulations, transparency, and user safeguards in the digital landscape.

The Home Secretary plans to propose collaborative efforts with tech giants like Google, Meta, Apple, and YouTube to safeguard democracy.


An estimated 2 billion people will participate in national elections worldwide in 2024, including in the UK, US, India, and other countries.

Incidents of deepfake audio imitations of politicians like Keir Starmer and Sadiq Khan, as well as misleading videos like the fake BBC News report on Rishi Sunak, have raised concerns.

In response, major tech companies have agreed to adopt precautions to prevent the misuse of AI tools for electoral interference.

Executives from various tech firms gathered at a conference to establish a framework for addressing deceptive AI-generated deepfakes that target voters. Elon Musk’s company X is among the signatories.

Nick Clegg, Meta’s president of global affairs, emphasized the need for collective action to address the challenges posed by emerging technologies like deepfakes.

Source: www.theguardian.com