Exposing Degradation: The Tale of Deepfakes, the Infamous AI Porn Hub

Patrizia Schlosser’s ordeal began with an apologetic call from a colleague. “I found this. Did you know?” he said, sharing a link that led her to a site called Mr. DeepFakes. There, she was horrified to discover fabricated images portraying her in degrading scenarios, labelled “Patrizia Schlosser’s slutty FUNK whore” (sic).

“They were highly explicit and humiliating,” said Schlosser, a journalist for North German Radio (NDR) and funk. “The way they went about it was disturbing, and it made it easy for them to distance themselves from the reality of the fakes. It was unsettling to think about someone scouring the internet for my pictures and compiling such content.”

Despite having previously investigated the adult film industry, Schlosser was unfamiliar with this particular site. “I had never come across Mr. DeepFakes before. It’s a platform dedicated to fake pornographic videos and images. I was taken aback by its size and the extensive collection of videos featuring every celebrity I knew.” Initially, she tried to ignore the images. “I shoved it to the back of my mind as a coping mechanism,” she explained. “Yet, even knowing it was fake, it felt unsettling. It’s not you, but it is you: depicted alongside a dog and a chain. I felt violated and confused. Finally, I resolved to act. I was upset and wanted those images removed.”

With the help of NDR’s STRG_F program, Schlosser succeeded in getting the images removed. She tracked down the young man responsible for creating them, even visiting his home and speaking with his mother (the perpetrator himself stayed hidden away). However, despite collaborating with Bellingcat, she could not identify the individual behind Mr. DeepFakes. Ross Higgins, a member of the Bellingcat team, noted: “My background is in money laundering investigations. When we scrutinized the site’s structure, we discovered it shared an internet service provider (ISP) with a genuine organized crime group.” The ISPs involved hinted at connections to the Russian mercenary group Wagner and to individuals named in the Panama Papers. Advertisements on the site also featured apps owned by Chinese tech companies that give the Chinese government access to user data. “This seemed too advanced for a mere hobbyist site,” Higgins remarked.

And indeed, that was just the beginning of what unfolded.

The story of Mr. DeepFakes, recognized as the largest and most notorious non-consensual deepfake porn platform, aligns closely with the broader story of AI-generated adult content. The term “deepfake” itself is believed to have originated with the site’s founder. This hub of AI pornography, viewed more than 2 billion times, featured numerous female celebrities, politicians, European royals, and even relatives of US presidents in distressing scenarios including abduction, torture, and extreme forms of sexual violence. Yet that content was merely the “shop window” for the site; the real “engine room” was the forum. There, anyone wishing to commission a deepfake of someone they knew (a girlfriend, sister, classmate, colleague and so on) could easily find a vendor to make one at a reasonable price. The forum also served as a “training ground,” where enthusiasts exchanged knowledge, tips, academic papers, and problem-solving techniques. One common challenge was how to create deepfakes without an extensive “dataset”: that is, of individuals with only a limited number of images online, such as acquaintances.

Filmmaker and activist Sophie Compton invested considerable time monitoring deepfakes while developing her acclaimed 2023 documentary, Another Body (available on iPlayer). “In retrospect, that site significantly contributed to the proliferation of deepfakes,” she stated. “There was a point at which such platforms could have been prevented from existing. Deepfake porn is merely one facet of the pervasive issue we face today. Had it not been for that site, I doubt we would have witnessed such an explosion in similar content.”

The origins of Mr. DeepFakes trace back to 2017-18, when AI-generated adult content first emerged on platforms such as Reddit. An anonymous user known as “Deepfake,” recognized as a “pioneer” of AI porn, spoke about the potential of such material in early interviews with Vice. However, after Reddit banned deepfake pornography in early 2018, the nascent community reacted vigorously. Compton noted: “We have records of discussions from that period illustrating how the small deepfake community was in uproar.” This prompted the creation of Mr. DeepFakes, which initially operated under the domain dpfks.com. The administrator kept the same username, gathered moderators, and set out rules, guidelines, and comprehensive instructions for using deepfake technology.

“It’s disheartening to look back on this chapter and realize how straightforward it could have been for authorities to curb this phenomenon,” Compton lamented. “Those involved did not initially expect to get away with it, posting things like, ‘They’ll come for us!’ and ‘They’ll never allow us this freedom!'” Yet, as they carried on with minimal repercussions, their confidence grew. Moderation efforts dwindled as their work, which often involved humiliating and degrading imagery, surged in popularity. Many of the popular figures exploited were quite young, ranging from Emma Watson to Billie Eilish and Millie Bobby Brown, with individuals such as Greta Thunberg also targeted.

Who was behind this project? The operator of Mr. DeepFakes occasionally granted anonymous interviews, including one in a 2022 BBC documentary entitled ‘Deepfake Porn: Could You Be Next?’, in which the ‘web developer’ behind the site, operating under the alias ‘Deepfake,’ asserted that consent from women was unnecessary because “it’s fantasy, not reality.”

Was financial gain a driving force? Mr. DeepFakes hosted advertisements and offered paid memberships, payable in cryptocurrency. One forum post from 2020 mentioned a monthly profit of between $4,000 and $7,000. “There was a commercial aspect to this,” Higgins stated, describing it as “a side venture, yet so much more.” That, too, contributed to its infamy.

At one point, the site showcased more than 6,000 images of Alexandria Ocasio-Cortez (AOC), allowing users to create deepfake pornography featuring her likeness. “The implication is that in today’s society, if you rise to prominence as a woman, you can expect your image to be misused for baseless exploitation,” Higgins noted. “The language used about women on that platform was particularly striking,” he added. “I had to adjust the tone in the online report to avoid sounding provocative, but it was emblematic of raw misogyny and hatred.”

In April of this year, law enforcement began investigating the site, in the belief that its communications with suspects could provide evidence.

On May 4th, Mr. DeepFakes was taken offline. A notice posted on the site blamed “data loss” resulting from the withdrawal of a “key service provider” and asserted that the operation would not be restarted: any website claiming to be the same was fake, and while the domain would eventually lapse, the operators disclaimed responsibility for its future use.

Mr. DeepFakes has ended, but Compton suggests it could have ended sooner. “All the indicators were there,” she commented. In April 2024, the UK government set out plans to criminalize the creation and distribution of deepfake sexual abuse content; in response, Mr. DeepFakes promptly restricted access for users based in the UK (the legislation was later dropped amid the 2024 election campaign). “This clearly demonstrated that Mr. DeepFakes wasn’t immune to government intervention. If it posed too much risk, they weren’t willing to continue,” Compton stated.

However, deepfake pornography has grown so widespread and normalized that it no longer relies on a singular “base camp.” “The techniques and knowledge that they were proud to share have now become so common that anyone can access them via an app at the push of a button,” Compton remarked.

For those seeking more sophisticated creations, self-proclaimed experts who once frequented the forums are now marketing their services. Patrizia Schlosser has seen this trend firsthand. “In my investigative work, I went undercover and reached out to several forum members, requesting deepfakes of an ex-girlfriend,” Schlosser recounted. “Many people claim this phenomenon is exclusive to celebrities, but that’s not accurate. The responses were always along the lines of ‘sure…’

“Following the shutdown of Mr. DeepFakes, I received an automated response from one of them saying something akin to: ‘If you want anything created, don’t hesitate to reach out… Mr. DeepFakes may be gone, but we’re still here providing services.’”

In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or by email at jo@samaritans.org or jo@samaritans.ie. In the US, you can call or text the 988 Lifeline on 988, or chat at 988lifeline.org. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org

In the UK, Rape Crisis offers support for rape and sexual assault on 0808 802 9999 in England and Wales, 0808 801 0302 in Scotland, and 0800 0246 991 in Northern Ireland. In the US, support is available through RAINN on 800-656-4673. In Australia, support is available at 1800Respect (1800 737 732). Other international helplines can be found at ibiblio.org/rcip/internl.html

Illustration: Guardian Design/Rich Cousins



Source: www.theguardian.com
