Exposing Degradation: The Tale of Mr. DeepFakes, the Infamous AI Porn Hub

Patrizia Schlosser’s ordeal began with an apologetic call from a colleague. “I found this. Did you know?” he said, sharing a link that led her to a site called Mr. DeepFakes. There, she was horrified to discover fabricated images portraying her in degrading scenarios, labeled “Patrizia Schlosser’s slutty FUNK whore” (sic).

“They were highly explicit and humiliating,” noted Schlosser, a journalist for North German Radio (NDR) and funk. “The way the makers operated was disturbing; it let them distance themselves from the reality of the fakes. It was unsettling to think about someone scouring the internet for my pictures and compiling such content.”

Despite her previous investigations into the adult film sector, this particular site was unfamiliar. “I had never come across Mr. DeepFakes before. It’s a platform dedicated to fake pornographic videos and images. I was taken aback by its size and the extensive collection of videos featuring every celebrity I knew.” Initially, Schlosser attempted to ignore the images. “I shoved it to the back of my mind as a coping mechanism,” she explained. “Yet, even knowing it was fake, it felt unsettling. It’s not you, but it is you—depicted alongside a dog and a chain. I felt violated and confused. Finally, I resolved to act. I was upset and wanted those images removed.”

With the help of NDR’s STRG_F program, Schlosser successfully had the images removed. She tracked down the young man responsible for creating them, even visiting his home and speaking with his mother (the perpetrator himself remained hidden away). However, despite collaborating with Bellingcat, she could not identify the individual behind Mr. DeepFakes. Ross Higgins, a member of the Bellingcat team, noted, “My background is in money laundering investigations. When we scrutinized the site’s structure, we discovered it shared internet service providers (ISPs) with organized crime groups.” These ISPs hinted at connections to the Russian mercenary group Wagner and to individuals named in the Panama Papers. Additionally, advertisements on the site featured apps owned by Chinese tech companies that gave the Chinese government access to user data. “This seemed too advanced for a mere hobbyist site,” Higgins remarked.

And indeed, that was just the beginning of what unfolded.

The narrative of Mr. DeepFakes, recognized as the largest and most infamous non-consensual deepfake porn platform, aligns closely with the broader story of AI-generated adult content. The term “deepfake” itself is believed to have originated with the site’s creator. This hub of AI pornography, viewed over 2 billion times, featured numerous female celebrities, politicians, European royals, and even relatives of US presidents in distressing scenarios including abduction, torture, and extreme sexual violence. Yet that content was merely the “shop window” for the site; the real “engine room” was the forum. There, anyone wishing to commission a deepfake of a known person (a girlfriend, sister, classmate, colleague, and so on) could easily find a vendor at a reasonable price. The forum also served as a “training ground,” where enthusiasts exchanged knowledge, tips, academic papers, and problem-solving techniques. One common challenge: how to create deepfakes without an extensive “dataset,” that is, of individuals with only a limited number of images online, like private acquaintances.

Filmmaker and activist Sophie Compton invested considerable time monitoring deepfakes while developing her acclaimed 2023 documentary, Another Body (available on iPlayer). “In retrospect, that site significantly contributed to the proliferation of deepfakes,” she stated. “There was a point at which such platforms could have been prevented from existing. Deepfake porn is merely one facet of the pervasive issue we face today. Had it not been for that site, I doubt we would have witnessed such an explosion in similar content.”

The origins of Mr. DeepFakes trace back to 2017-18, when AI-generated adult content first emerged on platforms like Reddit. An anonymous user known as “Deepfake,” regarded as a “pioneer” of AI porn, discussed the technology’s potential in early interviews with Vice. After Reddit banned deepfake pornography in early 2018, however, the nascent community reacted vigorously. Compton noted, “We have records of discussions from that period illustrating how the small deepfake community was in uproar.” This prompted the creation of Mr. DeepFakes, which initially operated under the domain dpfks.com. The administrator retained the same username, gathered moderators, and set out rules, guidelines, and comprehensive instructions for using deepfake technology.

“It’s disheartening to reflect on this chapter and realize how straightforward it could have been for authorities to curb this phenomenon,” Compton lamented. At first, participants feared a crackdown, posting thoughts like, “They’ll come for us!” and “They’ll never let us get away with this!” Yet, as they continued with minimal repercussions, their confidence grew, and moderation dwindled even as their work surged in popularity, much of it involving humiliating and degrading imagery. Many of the celebrities exploited were quite young, from Emma Watson to Billie Eilish and Millie Bobby Brown, and figures like Greta Thunberg were also targeted.

Who stands behind the project? The administrator of Mr. DeepFakes occasionally granted anonymous interviews, including one in a 2022 BBC documentary, Deepfake Porn: Could You Be Next?, in which the “web developer” behind the site, operating under the alias “Deepfake,” asserted that consent from the women depicted was unnecessary because “it’s fantasy, not reality.”

Was financial gain a driving force? Mr. DeepFakes hosted advertisements and offered paid memberships payable in cryptocurrency. One forum post from 2020 mentioned a monthly profit of between $4,000 and $7,000. “There was a commercial aspect to this,” Higgins stated, describing it as “a side venture, yet so much more.” That commercial streak only added to its infamy.

At one time, the site hosted over 6,000 images of Alexandria Ocasio-Cortez (AOC), allowing users to create deepfake pornography featuring her likeness. “The implication is that in today’s society, if you rise to prominence as a woman, you can expect your image to be exploited without consequence,” Higgins noted. “The language used about women on that platform was particularly striking,” he added. “I had to tone it down in the online report to avoid sounding gratuitous, but it was emblematic of raw misogyny and hatred.”

In April of this year, law enforcement began investigating the site, believing its communications with suspects could provide evidence.

On May 4th, Mr. DeepFakes was taken offline. A notice posted on the site blamed “data loss” caused by the withdrawal of a “key service provider,” and asserted that “I will not restart this operation.” It warned that any website claiming to be Mr. DeepFakes is fake, and that while the domain would eventually lapse, the operators disclaimed any responsibility for its future use.

Mr. DeepFakes has ended, but Compton suggests it could have ended sooner. “All indicators were present,” she commented. In April 2024, the UK government detailed plans to criminalize the creation and distribution of deepfake sexual abuse content. In response, Mr. DeepFakes promptly restricted access for users based in the UK (the legislation was later dropped amid the 2024 election campaign). “This clearly demonstrated that Mr. DeepFakes wasn’t immune to government intervention. If it posed too much risk, they weren’t willing to continue,” Compton stated.

However, deepfake pornography has grown so widespread and normalized that it no longer relies on a singular “base camp.” “The techniques and knowledge that they were proud to share have now become so common that anyone can access them via an app at the push of a button,” Compton remarked.

For those seeking more sophisticated creations, self-proclaimed experts who once frequented forums are now marketing their services. Patrizia Schlosser has firsthand knowledge of this trend. “In my investigative work, I went undercover and reached out to several forum members, requesting deepfakes of their ex-girlfriends,” Schlosser recounted. “Many people claim this phenomenon is exclusive to celebrities, but that’s not accurate. The responses were always along the lines of ‘sure…’

“Following the shutdown of Mr. DeepFakes, I received an automated response from one of them saying something akin to: ‘If you want anything created, don’t hesitate to reach out… Mr. DeepFakes may be gone, but we’re still here providing services.’”

In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or by email at jo@samaritans.org or jo@samaritans.ie. In the US, you can call or text the 988 Suicide & Crisis Lifeline on 988, or chat at 988lifeline.org. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org

In the UK, Rape Crisis offers support for sexual assault survivors on 0808 802 9999 in England and Wales, 0808 801 0302 in Scotland, and 0800 0246 991 in Northern Ireland. In the US, support is available through RAINN at 800-656-4673. In Australia, support is available at 1800Respect (1800 737 732). Other international helplines can be found at ibiblio.org/rcip/internl.html

Quick guide

Contact us about this story








The best public interest journalism relies on first-hand reporting from those in the know.

If you have something to share regarding this matter, please contact us confidentially using the methods below.

Secure messaging in the Guardian app

The Guardian app has a tool to submit story tips. Messages are encrypted end-to-end and hidden within the daily activities of the Guardian mobile applications. This ensures that observers can’t discern that you’re communicating with us, let alone the nature of your conversation.

If you haven’t downloaded the Guardian app yet, do so (iOS/Android). Go to the menu and select “Secure Messaging.”

SecureDrop, instant messaging, email, phone, and post

If you can use the Tor network securely without being monitored, you can communicate and share documents with us through the SecureDrop platform.

Lastly, our guide at theguardian.com/tips outlines several secure communication methods, along with their respective advantages and disadvantages.


Illustration: Guardian Design/Rich Cousins



Source: www.theguardian.com

Hackers Allegedly Breach Kido Nursery Chain, Exposing Photos of 8,000 Children

Approximately 8,000 names, photos, and addresses of children were allegedly taken from the Kido Nursery chain by a group of cybercriminals.

According to the BBC, these criminals are demanding ransoms from companies operating 18 sites in London, as well as additional locations in the US, India, and China.

The hackers also accessed details about the children’s parents and carers, and claimed to hold safeguarding notes as well. They reached out to several individuals by phone, employing classic extortion tactics.


Kido has been approached for comment but has yet to confirm the hackers’ assertions. The company has not released an official statement regarding the incident.

A nursery employee informed the BBC that she had been made aware of the data breach.

The Metropolitan Police indicated that they were alerted on Thursday “following reports of a ransomware attack on a London-based organization,” adding that “enquiries are ongoing and remain in the initial phase” within the Met’s cybercrime division. No arrests have been made to date.

A spokesperson for the Information Commissioner’s Office stated that “Kido International has reported the incident to us and we are currently assessing the information provided.”

Many organizations have experienced cyberattacks recently. The Co-operative Group reported an £80 million hit to profits following a hacking incident in April.


Jaguar Land Rover (JLR) was unable to build vehicles at the start of the month after a cyberattack compromised its computer systems.

As a result, the company had to shut down most of the systems used for tracking factory components, vehicles, and tools, halting output of its luxury Range Rover, Discovery, and Defender SUVs.

The company has since reopened a limited number of computer systems.



Source: www.theguardian.com

The Pitt: Exposing the Overcrowding Crisis in the Emergency Room

The emergency department waiting room was packed as always, patients crammed into hard metal chairs, some having waited for hours. Only those needing immediate care, like a heart attack patient, were seen right away.

One man had had enough: he slammed the glass window in front of the receptionist and stormed out for a cigarette, berating a nurse on his way and questioning whether she was doing her job.

Although not a real event, the scene comes from the Max series “The Pitt,” which airs its season finale on Thursday and is set in the emergency room of a fictional Pittsburgh hospital. Its underlying theme, overwhelming overcrowding, is a nationwide problem, and not an easy one to solve.

Emergency departments are strained and overwhelmed, the American College of Emergency Physicians reported in 2023.

“This system is at its breaking point,” stated Dr. Benjamin S. Abella, chair of emergency medicine at the Icahn School of Medicine at Mount Sinai in New York.

“The Pitt” depicts the daily struggle of emergency room doctors, nurses, medical students, custodians, and staff dealing with a variety of medical issues, from heart attacks and strokes to overdoses and severe burns. The show neatly resolves many of the complex issues in its 15 episodes, but reflects the real-life problems faced by medical systems operating beyond capacity.

The jammed waiting room, and patients stuck in the emergency department for days awaiting an inpatient bed (a practice known as “boarding”), highlight a critical issue: overcrowding, which emergency medicine groups have labeled a national public health crisis.

Medical supplies stacked in hallways, and patients examined there for lack of available space, further emphasize the strain on the system.

Instances of violence between patients with mental health issues and nurses are depicted in “The Pitt,” echoing the reality of the situation seen in emergency rooms nationwide.

Dr. Abella emphasizes that the show portrays a system on the brink of collapse, reflecting what is happening in emergency rooms across the country.

The issue is complex, explains Dr. Ezekiel J. Emanuel of the Healthcare Transformation Institute at the University of Pennsylvania’s Perelman School of Medicine: there is no simple solution, and resources are limited.

Financial constraints, patient flow issues, and capacity limitations in nursing homes contribute to the ongoing crisis in emergency departments.

Dr. Jeremy S. Faust, an emergency physician at Brigham and Women’s Hospital, highlights scheduling challenges around patient discharges and the role primary care could play in alleviating emergency room overcrowding.

In the real world and on screen in “The Pitt,” patients often end up in emergency rooms with problems a primary care physician could address, underscoring the need for better access to primary care services.

Dr. Emanuel underlines the difficulty of finding and accessing primary care, which leads many to seek immediate help in emergency rooms rather than wait for appointments.

The trend of seeking immediate solutions contributes to the ongoing problem of overcrowding in emergency rooms despite efforts to expand facilities.

Dr. Faust recalls how opening a new emergency room with more beds led to an influx of patients, demonstrating that expanding facilities alone does not solve the issue of overcrowding.

Source: www.nytimes.com

Utilizing Chatbots to Combat Phone Scammers: Exposing Real Criminals and Supporting True Victims

A scammer calls and asks for a passcode, leaving Malcolm, an older man with a British accent, confused.

“What business are you talking about?” Malcolm asks.

Another scam call comes in.

This time, Ibrahim, cooperative and polite with an Egyptian accent, answered the phone. “To be honest, I can’t really remember if I’ve bought anything recently,” he told the scammer. “Maybe one of my kids did,” Ibrahim continued, “but it’s not your fault, is it?”

Scammers are real, but Malcolm and Ibrahim aren’t. They’re just two of the conversational artificial intelligence bots created by Professor Dali Kaafar and his team, who founded Apate, named after the Greek goddess of deception, through his research at Macquarie University.

Apate’s goal is to use conversational AI to eradicate phone fraud worldwide, leveraging existing systems that let telecommunications companies redirect calls they identify as coming from scammers.

Kaafar was inspired to strike back at phone scammers after he told a “dad joke” to a scam caller in front of his two children as they enjoyed a picnic in the sun. His pointless chatter kept the scammer on the line. “The kids had a good laugh,” Kaafar says. “I thought, the goal was to trick them so they would waste their time and not talk to other people.

“In other words, we’re scamming the scammers.”

The next day, he called in his team from the university’s Cybersecurity Hub. He figured there had to be a better way than his dad joke approach — and something smarter than a popular existing technology: Lennybot.

Before Malcolm and Ibrahim, there was Lenny.

Lenny is a rambling, elderly Australian man who loves to chatter away. He’s a chatbot designed to poke fun at telemarketers.

Lenny’s anonymous creator posted about the bot on Reddit, saying they designed it as “a telemarketer’s worst nightmare… a lonely old man who wants to chat and is proud of his family, but can’t focus on the telemarketer’s purpose.” The act of tying up scammers this way is known as scambaiting.

Apate bot to the rescue

Australian telecommunications companies have blocked almost 2 billion scam calls since December 2020.

Thanks to $720,000 in funding from the Office of National Intelligence, the “victim chatbots” could now number in the hundreds of thousands, too many to name individually. The bots are of different “ages,” speak English with different accents, and exhibit a range of emotions, personalities, and reactions; sometimes naive, sometimes skeptical, sometimes rude.

Once a carrier detects a fraudster and routes them to a system like Apate, bots go to work to keep them busy. The bots try different strategies and learn what works to keep fraudsters on the phone line longer. Through successes and failures, the machines fine-tune their patterns.

In this way, the bots collect information such as call lengths, the times of day scammers tend to call, what information they are after, and the tactics they use, and that intelligence is mined to detect new scams.

Kaafar hopes Apate will disrupt the phone fraud business model, which is often run by large, multibillion-dollar criminal organizations. The next step will be to use the information it collects to warn of scams proactively and take action in real time.

“We’re talking about real criminals who are making our lives miserable,” Kaafar said. “We’re talking about the risks to real people.

“Sometimes people lose their life savings, struggle to live because of debt, and sometimes suffer mental trauma brought on by shame.”

Richard Buckland, a cybercrime professor at the University of New South Wales, said techniques like Apate’s were different from other forms of scambaiting, some of which are amateurish or amount to vigilantism.

“Usually scambaiting is problematic,” he said, “but this is sophisticated.”

He says mistakes can happen when individuals go it alone.

“You can go after the wrong person,” he said. Many scams are perpetrated by people in near-slave-like conditions, “and they’re not bad people,” he said.

“[And] some of the scambaiters go even further and try to enforce the law themselves, either by hacking back or engaging with the scammers. That’s a problem.”

But the Apate model appears to be using AI for good, as a kind of “honeypot” to lure criminals and learn from them, he says.

Buckland warns that false positives happen everywhere, so telcos need a high level of confidence that only fraudsters are being routed to the AI bots. He also warns that criminal organisations could use anti-fraud AI technology to train their own systems.

“The same techniques used to deceive scammers can be used to deceive people,” he says.

Scamwatch is run by the National Anti-Scam Centre (NASC) under the auspices of the Australian Competition and Consumer Commission (ACCC), and an ACCC spokesperson said scammers often impersonate well-known organisations and spoof legitimate phone numbers.

“Criminals create a sense of urgency to encourage their targeted victims to act quickly,” the spokesperson said, “often trying to convince victims to give up personal or bank details or provide remote access to their computers.”

“Criminals may already have detailed information about their targeted victims, such as names and addresses, obtained or purchased illegally through data breaches, phishing or other scams.”

This week Scamwatch had to issue a warning about what amounts to a scam about scams.

Scammers claiming to be NASC officials were calling people and telling them they were under investigation for alleged involvement in fraud.

The NASC says people should hang up immediately if they are contacted by a scammer. The spokesperson said the centre is aware of “technology initiatives to productize fraud prevention using AI voice personas,” including Apate, and is interested in evaluating the platform.

Meanwhile, there is a thriving community of scammers online, and Lenny remains one of their cult heroes.

One memorable recording shows Lenny asking a caller to wait a moment. Ducks start quacking in the background. “Sorry,” Lenny says. “What were you talking about?”

“Are you near the computer?” the caller asks impatiently. “Do you have a computer? Can you come by the computer right now?”

Lenny carries on until the conman loses his temper: “Shut up. Shut up. Shut up.”

“Can we wait a little longer?” Lenny asks, as the ducks begin quacking again.

Source: www.theguardian.com

Lawsuit filed against Grindr in London for exposing users’ HIV status to advertising firms

Grindr is potentially facing legal action from numerous users who allege that the dating app shared highly confidential personal data, including, in some instances, their HIV status, with advertising firms.

Law firm Austen Hays is preparing to sue the app’s American owner in London’s High Court, claiming breaches of UK data protection law.

The firm asserts that thousands of Grindr users in the UK had their information misused. They state that 670 individuals have already signed the claim, with “thousands more” showing interest in joining.

Grindr has stated it will respond vigorously to these allegations, saying they are based on an inaccurate characterization of its past policies.

Established in 2009 to facilitate interactions among gay men, Grindr is currently the largest dating app worldwide for gay, bisexual, transgender, and queer individuals, boasting millions of users.

The lawsuit against Grindr in the High Court centers on claims of personal data sharing with two advertising companies. It also suggests that these companies may have further sold the data to other entities.

New users may not be eligible to take part, as the claims against Grindr primarily cover the period before April 3, 2018, and between May 25, 2018, and April 7, 2020. Grindr updated its consent process in April 2020.

Los Angeles-headquartered Grindr ceased passing on users’ HIV status to third parties in April 2018 following a report by Norwegian researchers uncovering data sharing with two firms. In 2021, Norway’s data protection authority imposed a NOK 65 million fine on Grindr for violating data protection laws.

Grindr appealed the Norwegian decision.

The Norwegian ruling did not turn on the alleged sharing of users’ HIV status: it held that the mere fact of being registered on Grindr indicates a person is likely gay or bisexual, which makes the data sensitive in itself.

Chaya Hanoomanjee, the managing director at Austen Hays leading the case, remarked, “Our clients have suffered greatly from having their highly sensitive data shared without consent, experiencing fear, embarrassment, and anxiety.”


“We are dedicated to securing compensation for those impacted by the data breach and to ensuring all users can safely use the app without fear of their data being shared with third parties,” she added.

The law firm believes that affected users might be entitled to significant damages but did not disclose details.

A spokesperson from Grindr stated, “We prioritize safeguarding your data and adhering to all relevant privacy regulations, including in the UK. Our global privacy program demonstrates our commitment to privacy, and we will vigorously address this claim.”

Source: www.theguardian.com

Amazon’s Tactics to Combat Union Efforts: Exposing the Lawbreakers

Amazon stands accused of waging an anti-union campaign as organizing efforts spread among its employees, with allegations of unethical behavior surfacing. Workers attempting to organize their warehouse have reported fear tactics, misinformation, and unlawful retaliation by the tech giant.


Nearly two years ago, workers in Staten Island, New York, made history by forming America’s first warehouse union. As the Amazon union gathered momentum nationwide, the company worked to avoid a similar outcome at other locations.

Nanette Plasencia, a long-time employee at Amazon’s ONT8 fulfillment center in Moreno Valley, California, expressed concerns about the company’s tactics. She mentioned that Amazon is willing to go to great lengths, even if it means breaking the law, to prevent unionization.

Documentation shared with the Guardian revealed how Amazon pushed back against union efforts within ONT8 by disseminating anti-union messages. Employees were subjected to propaganda on TV screens warning them about the negative impact of unions on their paychecks.

These actions have led to allegations of unfair labor practices against Amazon, with both sides filing charges over the unionization process. The case is now awaiting a hearing and ruling from the National Labor Relations Board.

Despite the company’s opposition, Amazon workers in Moreno Valley filed for a union vote in October 2022. However, the election petition was withdrawn amid alleged violations of labor law by Amazon.

The case is set to be heard by an administrative law judge in August. Amazon denies any wrongdoing at ONT8, disputes the majority of the charges brought against it, and says it looks forward to making its case as the legal proceedings unfold.

Source: www.theguardian.com