Exposing Degradation: The Tale of Deepfakes, the Infamous AI Porn Hub | Technology

Patrizia Schlosser’s ordeal began with an apologetic call from a colleague. “I found this. Did you know?” he said, sharing a link that led her to a site called Mr. DeepFakes. There, she was horrified to discover fabricated images portraying her in degrading scenarios, labeled “Patrizia Schlosser’s slutty FUNK whore” (sic).

“They were highly explicit and humiliating,” noted Schlosser, a journalist for North German Radio (NDR) and funk. “The way the creators went about it was disturbing, and it allowed them to distance themselves from the reality of the fakes. It was unsettling to think about someone scouring the internet for my pictures and compiling such content.”

Despite her previous investigations into the adult film sector, this particular site was unfamiliar. “I had never come across Mr. DeepFakes before. It’s a platform dedicated to fake pornographic videos and images. I was taken aback by its size and the extensive collection of videos featuring every celebrity I knew.” Initially, Schlosser attempted to ignore the images. “I shoved it to the back of my mind as a coping mechanism,” she explained. “Yet, even knowing it was fake, it felt unsettling. It’s not you, but it is you—depicted alongside a dog and a chain. I felt violated and confused. Finally, I resolved to act. I was upset and wanted those images removed.”

With the help of NDR’s STRG_F program, Schlosser successfully had the images removed. She located the young man responsible for their creation, even visiting his home and speaking with his mother (the perpetrator himself remained hidden away). However, despite collaborating with Bellingcat, she could not identify the individual behind Mr. DeepFakes. Ross Higgins, a member of the Bellingcat team, noted, “My background is in money laundering investigations. When we scrutinized the site’s structure, we discovered it shared an internet service provider (ISP) with organized crime groups.” The hosting arrangements hinted at connections to the Russian mercenary group Wagner and to individuals named in the Panama Papers. Additionally, advertisements on the site featured apps owned by Chinese tech companies that provided the Chinese government with access to user data. “This seemed too advanced for a mere hobbyist site,” Higgins remarked.

And indeed, that was just the beginning of what unfolded.

The narrative of Mr. DeepFakes, recognized as the largest and most infamous non-consensual deepfake porn platform, aligns closely with the broader story of AI-generated adult content. The term “deepfake” itself is believed to derive from the username of the site’s creator. This hub of AI pornography, which has been viewed over 2 billion times, features numerous female celebrities, politicians, European royals, and even relatives of US presidents in distressing scenarios including abductions, torture, and extreme forms of sexual violence. Yet the content was merely a “shop window” for the site; the actual “engine room” was the forum. Here, anyone wishing to commission a deepfake of a known person (a girlfriend, sister, classmate, colleague, etc.) could easily find a vendor to do so at a reasonable price. The forum also served as a “training ground,” where enthusiasts exchanged knowledge, tips, academic papers, and problem-solving techniques. One common challenge was how to create deepfakes without an extensive “dataset”: that is, of individuals with only a limited number of images online, such as acquaintances.

Filmmaker and activist Sophie Compton invested considerable time monitoring deepfakes while developing her acclaimed 2023 documentary, Another Body (available on iPlayer). “In retrospect, that site significantly contributed to the proliferation of deepfakes,” she stated. “There was a point at which such platforms could have been prevented from existing. Deepfake porn is merely one facet of the pervasive issue we face today. Had it not been for that site, I doubt we would have witnessed such an explosion in similar content.”

The origins of Mr. DeepFakes trace back to 2017-18, when AI-generated adult content first emerged on platforms like Reddit. An anonymous user known as “Deepfake,” regarded as a “pioneer” of AI porn, discussed the technology’s potential in early interviews with Vice. After Reddit banned deepfake pornography in early 2018, however, the nascent community reacted vigorously. Compton noted, “We have records of discussions from that period illustrating how the small deepfake community was in uproar.” This prompted the creation of Mr. DeepFakes, which initially operated under the domain dpfks.com. The administrator retained the same username, gathered moderators, and set out rules, guidelines, and comprehensive instructions for using deepfake technology.

“It’s disheartening to reflect on this chapter and realize how straightforward it could have been for authorities to curb this phenomenon,” Compton lamented. “At first, participants feared they would be stopped, posting thoughts like, ‘They’ll come for us!’ and ‘They’ll never allow us this freedom!’” Yet as they continued with minimal repercussions, their confidence grew. Moderation dwindled as the popularity of their work, which often involved humiliating and degrading imagery, surged. Many of the popular figures exploited were quite young, ranging from Emma Watson to Billie Eilish and Millie Bobby Brown, with individuals like Greta Thunberg also targeted.

Who stands behind the project? The site’s operator occasionally granted anonymous interviews, including one in the 2022 BBC documentary Deepfake Porn: Could You Be Next?, in which the “web developer” behind the site, operating under the alias “Deepfake,” asserted that the women’s consent was unnecessary because “it’s fantasy, not reality.”

Was financial gain a driving force? Mr. DeepFakes hosted advertisements and offered paid memberships payable in cryptocurrency. One forum post from 2020 mentioned a monthly profit of between $4,000 and $7,000. “There was a commercial aspect to this,” Higgins stated, describing it as “a side venture, yet so much more.” This contributed to its infamy.

At one time, the site showcased over 6,000 images of Alexandria Ocasio-Cortez (AOC), allowing users to create deepfake pornography featuring her likeness. “The implication is that in today’s society, if you rise to prominence as a woman, you can expect your image to be misused for baseless exploitation,” Higgins noted. “The language utilized regarding women on that platform was particularly striking,” he added. “I had to adjust the tone in the online report to avoid sounding provocative, but it was emblematic of raw misogyny and hatred.”

In April of this year, law enforcement reportedly began investigating the site, believing its communications with suspects could provide evidence.

On May 4th, Mr. DeepFakes was taken offline. The notice posted on the site blamed “data loss” caused by the withdrawal of a “key service provider” and asserted that “I will not restart this operation.” It warned that any website claiming to be the same is fake, and that while the domain will eventually lapse, the operators take no responsibility for its future use.

Mr. DeepFakes has ended, but Compton suggests it could have concluded sooner. “All the indicators were present,” she commented. In April 2024, the UK government detailed plans to criminalize the creation and distribution of deepfake sexual abuse content. In response, Mr. DeepFakes promptly restricted access for users based in the UK (the legislation was later shelved when the 2024 general election was called). “This clearly demonstrated that Mr. DeepFakes wasn’t immune to government intervention; if it posed too much risk, they weren’t willing to continue,” Compton stated.

However, deepfake pornography has grown so widespread and normalized that it no longer relies on a singular “base camp.” “The techniques and knowledge that they were proud to share have now become so common that anyone can access them via an app at the push of a button,” Compton remarked.

For those seeking more sophisticated creations, self-proclaimed experts who once frequented forums are now marketing their services. Patrizia Schlosser has firsthand knowledge of this trend. “In my investigative work, I went undercover and reached out to several forum members, requesting deepfakes of their ex-girlfriends,” Schlosser recounted. “Many people claim this phenomenon is exclusive to celebrities, but that’s not accurate. The responses were always along the lines of ‘sure…’

“Following the shutdown of Mr. DeepFakes, I received an automated response from one of them saying something akin to: ‘If you want anything created, don’t hesitate to reach out… Mr. DeepFakes may be gone, but we’re still here providing services.’”

In the UK and Ireland, contact the Samaritans at freephone 116 123 or via email at jo@samaritans.org or jo@samaritans.ie. In the US, dial or text 988 Lifeline at 988 or chat via 988lifeline.org. Australian crisis support can be sought at Lifeline at 13 11 14. Find additional international helplines at: befrienders.org

In the UK, Rape Crisis offers support for victims of sexual assault on 0808 802 9999 in England and Wales, 0808 801 0302 in Scotland, and 0800 0246 991 in Northern Ireland. In the United States, support is available through RAINN at 800-656-4673. In Australia, support can be found at 1800Respect (1800 737 732). Further international helplines are listed at: ibiblio.org/rcip/internl.html



Illustration: Guardian Design/Rich Cousins



Source: www.theguardian.com

Porn Company Fined £1 Million by Ofcom for Inadequate Age Verification

A pornography company managing 18 adult websites has incurred a £1 million fine from the regulatory body Ofcom for inadequate age verification measures, marking the largest penalty issued thus far under the UK’s Online Safety Act.

The Belize-based AVS Group has also faced an additional fine of £50,000 for not adhering to information request protocols.

This incident represents the third time the communications regulator has enforced fines on a company regarding the UK’s Online Safety Act, which implemented stringent age verification rules in July.

AVS has implemented what it describes as an age verification system; however, regulatory assessments have deemed it to be ineffective.

The £1 million penalty will grow by £1,000 for each day the company fails to introduce compliant age checks beyond a 72-hour grace period, while the £50,000 fine grows by £300 a day, for up to 60 days, until the company complies with the information request.

Since the implementation of the new regulations, Ofcom has initiated investigations into 92 online services, giving priority to sites attracting millions of monthly visitors, considering the potential harm these sites pose.

Oliver Griffiths, Ofcom’s group director for online safety, told BBC Radio 4’s Today program that the fines are part of a “broader shift” towards platform accountability, including the “large scale” rollout of age verification on adult sites to keep children away from pornographic material.

Mr. Griffiths noted that more than 90 websites, inclusive of 83 adult sites, are still under scrutiny for possible infringements of the law, and further penalties are expected.

Ofcom also indicated that a significant social media platform, unnamed, could face formal repercussions should it fail to enhance its compliance measures. This platform has reportedly provided insufficient risk assessments needed to evaluate the potential appearance of illegal content, such as scams and unlawful pornography, to its users.

“We reached back out to inform them a re-evaluation was necessary,” Griffiths stated. “Should they fail to treat this matter with due seriousness again, we will promptly escalate to enforcement.”

Furthermore, Ofcom disclosed its review of major anonymous platforms’ capabilities to eliminate illegal terrorist and hate-driven content, including anti-Semitic and anti-Muslim material, with possible enforcement actions on the horizon.

The Online Safety Act brings forth several new regulations designed to shield children and adults from harmful content, with violations potentially costing up to £18 million or 10% of annual UK revenue, or even business closures.

More than half of the UK’s 100 most frequented adult services have instituted age verification since the rule changes in July, alongside social media sites like X, TikTok, and Reddit, according to the regulator. Mr. Griffiths acknowledged a rapid increase in the use of virtual private networks (VPNs), which enable users to bypass regional restrictions on certain sites, noting that usage had peaked at over 1 million users when age verification came in but has since declined “significantly,” to around 600,000.

“There has been a slight uptick in VPN usage, but it hasn’t been wholesale. Interesting research …indicates that children do not seem to constitute a large proportion of it,” he noted.

Technology Secretary Liz Kendall stated: “Since the Online Safety Act was enacted, platforms are indeed beginning to accept responsibility for safeguarding children and eradicating illegal and hateful content.”

“Ofcom has the full backing of the Government and is leveraging every authority at its disposal to ensure a service prioritizing user safety. Ensuring the online safety of children remains a top priority for this Government and for me personally.”

Source: www.theguardian.com

AI-Generated “Poverty Porn” Images Exploited by Aid Agencies

Images generated by AI depicting extreme poverty, children, and survivors of sexual violence are increasingly populating stock photo platforms and are being utilized by prominent health NGOs, according to global health specialists who raise alarms over a shift towards what they term “poverty porn.”

“They are widespread,” shares Noah Arnold from Fair Picture, a Switzerland-based organization dedicated to fostering ethical imagery in global development. “Some organizations are actively employing AI visuals, while others are experimenting cautiously.”

“The images replicate the visual lexicon of poverty: children with empty plates, cracked earth, and other typical visuals,” says Arseni Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp who specializes in global health imagery.

Alenichev has amassed over 100 AI-generated images depicting extreme poverty intended for individuals and NGOs to use in social media initiatives against hunger and sexual violence. The visuals he provided to the Guardian reflect scenes that perpetuate exaggerated stereotypes, such as an African girl dressed in a wedding gown with tears on her cheeks. In a comment article published Thursday, he argues that these images represent “poverty porn 2.0”.

While quantifying the prevalence of AI-generated images is challenging, Alenichev and his team believe their usage is rising, driven by concerns regarding consent and financial constraints. Arnold mentioned that budget cuts to NGO funding in the U.S. exacerbate the situation.

“It’s evident organizations are beginning to consider synthetic images in place of real photographs because they are more affordable and eliminate the need for consent or other complications,” Alenichev explained.

AI-generated visuals depicting extreme poverty now appear in abundance on popular stock photo websites, including Adobe Stock and Freepik, under search terms like “poverty.” Many of these images carry captions such as “Realistic child in refugee camp” and “Children in Asia swim in garbage-filled rivers.” Adobe’s licensing fees for such images are approximately £60.

“They are deeply racist. They should never have been published as they reflect the worst stereotypes about Africa, India, and more,” Alenichev asserted.

Freepik’s CEO, Joaquín Abela, stated that accountability for the use of these extreme images falls on media consumers rather than platforms like his. He pointed out that the AI-generated stock photos come from the platform’s global user base, and that when an image is purchased by a Freepik customer, the contributor who created it earns a licensing fee.

He added that Freepik is attempting to mitigate bias present elsewhere in its photo library by “introducing diversity” and striving for gender balance in images of professionals like lawyers and CEOs featured on the site.

However, he acknowledged limitations in what can be achieved on his platform. “It’s akin to drying the ocean. We make efforts, but the reality is that if consumers worldwide demand images in a specific manner, there’s little anyone can do.”

A screen capture of an AI-generated image of “poverty” on a stock photo site, raising concerns about biased depictions and stereotypes. Illustration: Freepik

Prominent charities have already integrated AI-generated images into their global health communication strategies. In 2023, the Dutch branch of the British charity Plan International launched a video campaign against child marriage featuring AI-generated images, including those of a girl with a black eye, an elderly man, and a pregnant teenager.

Last year, the United Nations released a video that showcased the AI-generated testimony of a Burundian woman who was raped and left for dead in 1993 amidst the civil war. This video was removed after The Guardian reached out to the UN for a statement.

“The video in question was produced over a year ago utilizing rapidly advancing tools and was taken down because we perceived it to demonstrate inappropriate use of AI, potentially jeopardizing the integrity of the information by blending real footage with nearly authentic, artificially generated content,” remarked a UN peacekeeping spokesperson.

“The United Nations remains dedicated to supporting survivors of conflict-related sexual violence, including through innovative and creative advocacy.”

Arnold commented that the rising reliance on these AI images is rooted in a long-standing discussion concerning ethical imagery and respectful storytelling concerning poverty and violence. “It’s likely simpler to procure an off-the-shelf AI visual, as it’s not tied to any real individual.”


Kate Kaldle, a communications consultant for NGOs, expressed her disgust at the images, recalling previous conversations about the concept of “poverty porn” in the sector.

“It’s unfortunate that the long struggle for more ethical representation of people experiencing poverty now seems to have been in vain,” she lamented.

Generative AI tools have long been known to reproduce—and at times exaggerate—widely-held societal biases. Alenichev mentioned that this issue could be intensified by the presence of biased images in global health communications, as such images can circulate across the internet and ultimately be used to train the next wave of AI models, which has been shown to exacerbate prejudice.

A spokesperson for Plan International noted that as of this year, the NGO has “adopted guidance advising against the use of AI to portray individual children,” and that their 2023 campaign employed AI-generated images to maintain “the privacy and dignity of real girls.”

Adobe declined to comment.

Source: www.theguardian.com

Man Fined $340,000 for Creating Deepfake Porn of a Prominent Australian Woman in Landmark Case

A man who shared deepfake pornographic images of prominent Australian women has been handed a heavy fine in a landmark first case intended to send a “strong message.”

On Friday, a federal court ordered Anthony Rotondo, also known as Antonio, to pay a penalty of $343,500 plus legal costs, nearly two years after the online regulator, the eSafety Commissioner, filed a lawsuit against him.

Rotondo was responsible for posting the images on a website named Mrdeepfakes.com.


Regulators maintained that substantial civil penalties were essential to underscore the severity of violations against online safety laws and the harm inflicted upon women who are victims of image-based abuse.

“This action sends a strong message regarding the repercussions for individuals who engage in image-based abuse through deepfakes,” the watchdog stated late Friday.

“eSafety is profoundly concerned about the creation and distribution of non-consensual explicit deepfake images, as these can lead to significant psychological and emotional distress.”

Commissioner Julie Inman Grant took Rotondo to the federal court in 2023 after he failed to comply with a removal notice, which could not be enforced because he was not then resident in Australia.

“If you believe you’re in the right, get an arrest warrant,” he replied at the time.

Following the court’s order for Rotondo to remove the images and refrain from sharing them, he sent them via email to 50 addresses, including the Esafety Commissioner and various media outlets.

The commissioner initiated federal court proceedings shortly after police ascertained that Rotondo had travelled from the Philippines to the Gold Coast.


He initially dismissed the proceedings as trivial but eventually admitted the charges.

The images were removed after Rotondo voluntarily provided passwords and necessary details to the Commissioner’s officers.




Source: www.theguardian.com

Ofcom’s Call to Action: Ensuring Porn Sites Comply with Online Safety Regulations

The Ofcom regulators, equipped with clipboards, navigated the exhibition space at the International Adult Industry Conference in Prague over the weekend, aiming to persuade 1,700 attendees to adhere to the UK’s newly implemented online safety regulations.

“Be truthful,” a regulator addressed a crowd of porn site operators and staff during a midday seminar discussing the age verification requirements that were set in motion in July as part of the legislative framework for accessing adult content. “Be transparent. If your efforts fall short, include them in the risk assessment.”

Attendees enjoying complimentary champagne offered by conference sponsors posed some uneasy inquiries. What steps should a company take if it lacks the funds for age verification? How hefty are the penalties? Can a site circumvent regulations by blocking traffic from the UK? Were Ofcom officials aware that some site owners might be trying to undermine their competitors by reporting them for non-compliance?




Presentation by Ofcom at the Wyndham Diplomat Hotel in Prague, Czech Republic. Photograph: Björn Steinz/Panos Pictures

“We are here to assist you,” another Ofcom regulator explained to an audience of around 50 men and seven women. “It’s a challenge. There’s a wealth of information to absorb, but we exist to assist members of the adult industry in achieving compliance.”

Seven weeks after the online safety law took effect, Ofcom officials were keen to portray the adult industry’s response in a positive light. They noted that most of the top 10 and top 100 adult websites have either implemented age verification checks or restricted access within the UK. Platforms like X and Reddit, which host pornographic content, also now offer age verification. Visits to the top five age verification providers surged to 7.5 million in August, up from 1 million in June.

Regulators intend to frame the introduction of age verification on July 27 as a pivotal moment for the industry, dubbing it “AV Day,” when children in Britain would be definitively blocked from accessing porn. The situation, however, is more nuanced.




Ofcom screen at the Prague conference. Photograph: Björn Steinz/Panos Pictures

In the days following the law’s enactment, there was a notable spike in VPN downloads, enabling users to disguise their locations and bypass age verification prompts.

“This development was quite unfortunate,” commented Mike Stabile, director of public policy at the Free Speech Coalition, which represents the American adult entertainment industry. He said traffic had moved from compliant sites to non-compliant ones. “VPN usage has surged. People are not complying. Traffic is redirecting to piracy sites. I don’t think Ofcom will regard this outcome as what they intended.”

Corey Silverstein, an American attorney representing several companies in the adult industry and who has encountered numerous failed attempts at enforcing age verification laws in the U.S., noted a significant skepticism towards regulators. “While people maintain professionalism and politeness, this is not the most agreeable audience. Some display overt disdain. You can sense the discomfort in participating in an event like this.”

Despite this, he delivered a presentation for site owners, advising them to confront their aversion to regulators and collaborate with Ofcom to implement new guidelines.

“Their intent is not to harm your business. They are quite friendly. They aren’t out to eliminate you,” he stated. “As I understand it, they do not even impose financial fines. Their goal is to guide you towards compliance.”

Ofcom officials were dressed in neatly pressed white shirts, working amid the ambient sounds of steel drums, distributing A4 printed questionnaires while sponsors served cocktails and a troupe of feather-clad dancers entertained attendees.

The paper form, which allowed for anonymous responses, asked representatives to indicate whether they had adopted age verification for UK users and, if not, to explain why. By Saturday evening, Ofcom officials noted that few representatives had completed the form but remained hopeful for better participation on Sunday.

Though no fines have yet been issued under the Online Safety Act, Ofcom has initiated 12 investigations into over 60 porn services, including websites and applications.


Updates from these investigations have fueled discontent among adult site proprietors, who are also contending with tightening regulations in the U.S. and France. Yet there was some begrudging acknowledgment of Ofcom’s effort to engage with the industry through events and dialogue.

“In the U.S., regulatory bodies often shy away from engaging with us,” remarked Alex Kekeshi, vice president of PornHub’s brand and community. “I appreciate Ofcom’s invitation to the table. Such engagement is often overlooked in discussions on industry regulations.”

Before July 27, Ofcom established a specialized porn portfolio team of six compliance officers to encourage businesses to meet regulatory standards. Team members requested anonymity, citing the sensitivity of their work, and have attended similar events in Berlin, Amsterdam, and LA. Additionally, a larger team of over 40 staff members is focused on investigating organizations that fail to comply.

“We are acutely aware of the industry’s scale and the ease of establishing services for distributing pornographic content indiscriminately,” one regulator remarked. “We are not claiming to lead every service towards compliance; our strategy is to allocate resources where children face the highest risk of harm.” When penalties are applied, they are designed with a deterrent effect, potentially reaching up to £18 million or 10% of global revenue.

“Companies can opt not to risk being pursued by us or facing penalties. We aim to shift the incentive balance so that compliance is deemed less risky.”

Another Ofcom representative avoided commenting on the increase in VPN downloads, asserting that the law’s purpose is to prevent children from inadvertently encountering pornographic content (rather than going after those who deliberately seek it).

Alongside their efforts to comply with the new age verification requirements, site owners urged Ofcom to turn its attention to the challenges posed by AI-generated content, and to stop companies like Visa and Mastercard from processing payments linked to violent and illegal content. Sites and applications featuring AI-generated pornography also fall within the scope of the legislation.

“How can we distinguish, within compliance frameworks, between an AI-generated model that looks 15 and one that looks 18 or 19?” one attendee asked, expressing concern about the potential for AI to inadvertently generate child sexual abuse material.

Steve Jones, who operates an AI porn site, stressed that AI systems must be programmed to recognize what is inappropriate. “We must ensure that depictions are not too youthful-looking. We disallow pigtails, braces, and toys typically associated with children,” he stated. “AI lacks the ability to differentiate between youthful-looking adults and minors. It’s crucial to teach these distinctions.”

Source: www.theguardian.com

UK Online Safety Law Requires Porn Sites to Implement 5 Million Daily Age Checks | Internet Safety

Recent statistics indicate that since the implementation of age verification for pornographic websites, the UK is conducting an additional five million online age checks daily.

The Association of Age Verification Providers (AVPA) reported a significant increase in age checks across the UK since Friday, coinciding with the enforcement of mandatory age verification under the Online Safety Act.

Iain Corby, executive director of the AVPA, said the industry was now carrying out an additional five million checks a day for UK users.

In the UK, the use of virtual private networks (VPNs), which allow users to bypass restrictions on blocked sites, is rapidly increasing as they mask users’ actual locations. Four of the top five free applications in the UK Apple Download Store are VPNs, with popular provider Proton reporting an astonishing 1,800% surge in downloads.

Last week, Ofcom, the UK communications regulator, indicated it may initiate a formal inquiry into the inadequate age checks reported this week. Ofcom stated it will actively monitor compliance with age verification requirements and may investigate specific services as needed.

AVPA, the industry association representing UK age verification companies, has been tracking the checks performed for UK porn providers, which were required to implement “highly effective” age assurance by July 25th.

Member companies that carry out age checks were asked to report “the number of checks conducted today for highly effective age assurance.”

While the AVPA said it could not provide a baseline for comparison, it noted that effective age verification is new to dedicated UK porn sites, which previously required only a simple click-through confirmation of age.

An Ofcom spokesperson said: “Until now, children could easily stumble upon pornographic and other online content without seeking it out. Age checks are essential to prevent that. We must ensure platforms are adhering to these requirements and anticipate enforcement actions against non-compliant companies.”

Ofcom has stressed that service providers must not promote the use of VPNs to circumvent age checks.

Penalties for breaching online safety regulations, including failing to implement adequate age verification, can reach 10% of global revenue, and in the most serious cases access to a site can be blocked entirely.

Age verification methods endorsed by Ofcom and used by AVPA members include facial age estimation, which analyses a person’s age via live photos and video; verification through credit card providers, banks, or mobile network operators; photo ID matching, where a user’s ID is compared with a selfie; and a “digital identity wallet” containing proof of age.

Prominent pornographic platforms, including Pornhub, the UK’s leading porn site, have pledged to adopt the stringent age verification measures mandated by the Act.

The law compels sites and applications to protect children from various harmful content, specifically material that encourages suicide, self-harm, and eating disorders. The largest platforms must also take action to prevent the dissemination of abusive content targeting individuals with characteristics protected under equality law, such as age, race, and gender.

Free speech advocates argue that the child-safety provisions have led to content being age-gated unnecessarily, including several Reddit forums dedicated to discussions of alcohol abuse.

Reddit and X have been approached for comment.

Source: www.theguardian.com

House Approves Bill to Prohibit Sharing of Revenge Porn, Sending It to Trump

On Monday, the House passed, with strong bipartisan support, a bill criminalizing the non-consensual sharing of sexually explicit photos and videos, including AI-generated imagery commonly referred to as “deepfakes.”

The legislation, which cleared with a vote of 409 to 2, is expected to be signed by President Trump soon.

Known as the Take It Down Act, the legislation targets what is commonly called “revenge porn.” Under its provisions, social media companies and online platforms must remove such images within 48 hours of receiving notification.

The bill drew support across party lines and passed the Senate unanimously in February. Trump’s endorsement, highlighted in his joint address to Congress last month, appears to have eased its passage.

Co-sponsored by Sen. Ted Cruz, Republican of Texas, and Sen. Amy Klobuchar, Democrat of Minnesota, it is the first major internet content law to pass Congress since 2018, when legislation aimed at combating online sex trafficking was enacted. While focused on revenge porn and deepfakes, the bill marks a notable step toward holding accountable internet companies that have largely evaded governmental oversight for years.

The strong backing for the Take It Down Act underscores the growing frustration among legislators regarding social media platforms like Facebook, Instagram, and X.

Revenge porn and deepfakes affect individuals of all ages, particularly impacting teenage girls, as the proliferation of “nudification” apps allows boys to secretly create and distribute sexually explicit images of female classmates.

Rep. Maria Elvira Salazar, a Republican from Florida, introduced a parallel bill in the House and stated on Monday that this legislation aims to prevent the rampant abuse and harassment faced by young girls online.

“Using the likenesses, voices, and images of young, susceptible women to manipulate, coerce, and publicly humiliate them for amusement or revenge is utterly reprehensible,” said Salazar.

Such legislation has been attempted frequently in states across the country: although South Carolina lacks a specific law against revenge porn, at least 20 other states have enacted measures addressing sexually explicit deepfakes.

The measures passed on Monday are part of ongoing bipartisan efforts by lawmakers to tackle deepfake pornography. Cruz and Klobuchar first introduced the bill last year, but it did not advance in a Republican-led House. Reintroduced this year, it gained traction after receiving endorsement from first lady Melania Trump.

Last year, Alexandria Ocasio-Cortez, a Democratic congresswoman from New York, proposed legislation enabling individuals depicted in sexually explicit deepfakes to sue the creators and distributors of such content. That bill has not been reintroduced this year.

Legislators have recently focused on several bills aimed at safeguarding children from online sexual exploitation, bullying, and harmful algorithms. In January 2024, executives from Meta, TikTok, and other tech firms faced tough questions from lawmakers as they defended their platforms.

During these hearings, Meta CEO Mark Zuckerberg was compelled to apologize to parents who had lost children to online dangers.

However, some advocates for free speech caution that such measures could stifle legitimate expression, arguing that similar laws might inadvertently lead to the removal of both lawful and unlawful content.

“Despite the good intentions, the potentially harmful effects of the bill on constitutional rights to free speech and privacy online cannot be overlooked,” stated Becca Branham, assistant director of the Free Expression Project at the Center for Democracy & Technology, a research organization.

Branham also remarked that the Take It Down Act serves as “a potentially weaponized enforcement tool that endangers meaningful progress in the battle against image-based sexual abuse.”

Source: www.nytimes.com

Prosecutors urged to step up efforts to combat revenge porn and protect victims from abusers depicted in images online

The Crown Prosecution Service (CPS) is updating its guidance on “revenge porn” offences to help ensure that offenders are no longer allowed to retain explicit photos of their victims.

The Observer reported last month that magistrates’ courts were not ordering the removal of content in image-based abuse cases, and that prosecutors were not requesting such orders.

An examination of court records revealed that out of 98 cases, only three resulted in deprivation orders requiring offenders to surrender their devices and delete private photos and videos.

In one case, a magistrate described a man’s campaign to emotionally intimidate his victim as “deeply disturbing”. Despite receiving a suspended sentence, he was not ordered to surrender his device so the images could be deleted, leaving police without the legal authority to do so.

These findings prompted outcry from activists, demanding immediate action and stating that the failure to act left victims “living in fear.” The CPS acknowledged the need for more action to prevent these images from being used to perpetrate further crimes.

The prosecutor’s guidance on communication crimes is currently being revised. The new guidelines, released this week, urge prosecutors to utilize deprivation orders more effectively and consistently.

Prosecutors are advised to consider applying for the order early in their case strategy, so that offenders are stripped of their devices promptly, and to take a more uniform approach to confiscating devices from suspects from the outset.

Campaigners welcomed the change as a positive step forward.

Elena Michael from the campaign group #notyourporn commended the guidance as a helpful step but emphasized the need for further actions to be taken.

She highlighted additional areas for improvement, such as addressing issues related to evidence collection and advocating for a more comprehensive investigation process.

Furthermore, she urged clarity on whether the guidance would extend to hard drives, cloud storage, and physical devices in addition to mobile phones.

Law professor Clare McGlynn praised the updated guidance as a positive development, emphasizing the importance of handling image-based sexual abuse cases more effectively.

The Sentencing Council, which is responsible for issuing guidance to judges and magistrates, said it is monitoring the government’s proposed new intimate image laws and will consider any necessary adjustments to its guidelines once they are enacted.

Source: www.theguardian.com

Exploring the Unusual Legacy of Xbox Kinect: Ghost Hunting, Porn, and Interactive Art | Games

Released in 2010 for the Xbox 360, Kinect looked like the future, at least for a moment. A camera that detects gestures and reproduces them on screen, Kinect allowed players to control video games with their own bodies. It was a sensation at first, selling 1m units in its first 10 days, and it remains one of the best-selling gaming peripherals ever.

However, a lack of games, unreliable performance and a motion-control market already dominated by the Nintendo Wii quickly cooled enthusiasm. Microsoft released a new version of Kinect with the Xbox One in 2013, but it proved an embarrassing flop, and the Kinect line was abruptly cancelled in 2017. The Guardian reached out to several people involved in Kinect’s afterlife: people keen to discuss Microsoft’s motion-sensing camera who never used it in a game at all.

Theo Watson is co-founder of Design I/O, a creative studio specializing in interactive installations, many of which use depth cameras including Kinect. “When Kinect came out, it was really like a dream situation,” he recalls. “There are probably more than 10 installations around the world right now where Kinects track people, but Kinect’s use in games was a blip.”

An assistant demonstrates the Xbox 360 game Kinect Adventures during a 2010 media briefing. Photograph: Mario Anzuoni/Reuters

Kinect turns 15 this year, and Watson speaks about it with a rare joy. (“I can’t stop talking about depth cameras,” he adds. “That’s my passion.”) As part of a collaborative effort building on the work of Hector “Marcan” Martin, Watson helped make Microsoft’s gaming camera open source. It quickly became clear that Kinect would not become the future of video games, as Microsoft originally intended. Instead, it was a game changer in other ways: for artists, roboticists, and… ghost hunters.

Kinect works with a structured-light system. In other words, it creates depth data by projecting a cloud of infrared dots, reading the distortions of that pattern, and computing depth from them. From this data, its machine-learning core was trained to “see” the human body. In games like Kinect Sports, the camera could turn the body into a controller. Meanwhile, for those creating interactive artwork, it cut out much of the programming and busywork demanded by more basic infrared cameras.
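The triangulation behind structured light can be sketched in a few lines of Python. This is a toy illustration of the principle only, not Microsoft’s implementation; the focal length, baseline, and disparity values below are invented for the example.

```python
# Toy sketch of structured-light depth sensing: a projector casts a known
# infrared dot pattern, and a camera offset by a fixed baseline observes
# where each dot lands. The sideways shift of a dot (its disparity) encodes
# depth via the same triangulation used in stereo vision:
#
#     depth = focal_length * baseline / disparity
#
# All numbers below are illustrative, not Kinect's real calibration values.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 580.0,
                         baseline_m: float = 0.075) -> float:
    """Return depth in metres for a dot shifted by `disparity_px` pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Dots that shift more are closer; dots that shift less are farther away.
near = depth_from_disparity(40.0)  # large shift, close object
far = depth_from_disparity(10.0)   # small shift, distant object
assert near < far
```

The real device matches thousands of projected dots against a stored reference pattern many times per second, but the underlying geometry is this same triangulation.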

“The best analogy is like going from black-and-white TV to colour,” Watson says. “There was this whole extra world that was open for us.” Powerful depth cameras previously retailed for around $6,000 (£4,740), but Microsoft condensed the technology into a robust, lightweight device for $150 (£118).

Roboticists were also grateful for an accessible sensor that enabled vision and movement. “Previously, only planar 2D lidar information was available to detect obstacles and map the environment,” says Lucetti, whose company develops advanced depth-sensing cameras and software. 2D lidar detects objects by projecting a laser and measuring the time the light takes to reflect back; Kinect, by contrast, could create detailed, accurate depth maps that tell a robot not just where obstacles are but how to navigate around them. Before sensors like Kinect, Lucetti says, “grass clusters were perceived no differently from rocks, with all the consequences that has for navigation.”
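The navigational advantage Lucetti describes can be illustrated with a toy example: a planar 2D lidar inspects a single row of the scene, while a depth camera sees a full grid. The depth values and threshold below are invented for illustration.

```python
# Toy comparison of a planar 2D lidar scan vs a full depth map. A 2D lidar
# sees one horizontal row of ranges, so a low obstacle below (or an overhang
# above) the scan plane is invisible to it; a depth camera returns a full
# 2D grid of ranges. All values here are illustrative.

# A 3x4 depth map in metres: rows are vertical, columns are horizontal.
depth_map = [
    [3.0, 3.0, 3.0, 3.0],  # top row: clear
    [3.0, 3.0, 3.0, 3.0],  # middle row: the plane a 2D lidar would scan
    [3.0, 0.4, 0.5, 3.0],  # bottom row: a low obstacle, e.g. a rock
]

THRESHOLD_M = 1.0  # anything nearer than this counts as an obstacle

def lidar_sees_obstacle(dmap, scan_row=1, threshold=THRESHOLD_M):
    """A planar lidar only ever inspects one row of the scene."""
    return any(r < threshold for r in dmap[scan_row])

def depth_camera_sees_obstacle(dmap, threshold=THRESHOLD_M):
    """A depth camera inspects every cell of the depth image."""
    return any(r < threshold for row in dmap for r in row)

assert not lidar_sees_obstacle(depth_map)     # lidar misses the low rock
assert depth_camera_sees_obstacle(depth_map)  # the depth map catches it
```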

This type of depth camera now underpins much autonomous robotics, from the AutoNav system of NASA’s Perseverance Mars rover, launched in 2020, to Apple’s Face ID technology. (Apple purchased PrimeSense, the Israeli company behind Kinect’s structured-light system, in 2013.)

NASA’s Mars Perseverance rover in 2020. Photograph: NASA/UPI/Rex/Shutterstock

Kinect’s technology was quickly eclipsed by open-source sensors and more advanced motion-sensing devices that are free to use. Yet since Microsoft stopped manufacturing the Kinect line in 2017, the little camera has enjoyed a vibrant, if unlikely, afterlife. It has been used to survey the topography of South Korea’s demilitarized zone, worked alongside CT scanners to help position patients, been spotted monitoring a baggage hall at Newark Liberty International Airport’s newest terminal (United Airlines declined to comment), and gamified training for the US military. Attached to drones and rescue robots, it has even found uses in porn.

“I don’t know if anyone had a solid vision of what interactive sex with Kinect would involve,” says Kyle Machulis, founder of buttplug.io and another member of the OpenKinect team. The camera was deployed mainly as a complex controller for 3D sex games, playing “a futuristic marketing role more than anything in actual consumer use,” says Machulis. In that role it was a success, attracting enough of a surge in attention that Microsoft vowed to somehow ban porn involving Kinect. It was an interesting experiment, but it turned out that the addition of a novelty device was not a turn-on for many porn users. What’s more, Machulis says, when the camera malfunctions, “it looks pretty scary.”


There is less such concern among ghost hunters, who thrive on the ambiguity of ageing technology and have rebranded Kinect as an “SLS” (structured light sensor) camera. They deploy its body tracking to find figures invisible to the naked eye, excited by Kinect’s habit of “seeing” a body that is not actually there and convinced that these stick-figure skeletons are unequivocal manifestations of spirits.

The paranormal investigation industry doesn’t much care about false positives, as long as those false positives can be read as paranormal, says a freelance science performer with a show dedicated to investigating ghost-hunting technology. “It’s very normal for a ghost hunter to use an infrared camera and torch to film himself in the dark, bathing the scene with IR light while using a sensor that measures a specific pattern of infrared dots,” he says. Given that Kinect is specifically designed to recognize the human body in the data it receives, it would be stranger if it didn’t pick up anomalous figures in this context.

So Kinect lives on among people looking for evidence of life after death. In the right hands, the camera is still going strong. Theo Watson points me to Connected Worlds, an exhibition at the New York Hall of Science since 2015. Of the many Kinect devices powering the installation, only a handful have needed replacing in the 10 years since it opened, one of them a few weeks ago. Watson began stockpiling devices when Microsoft stopped production.

“Half of the projects on our website would not exist without Kinect,” he says. “If this camera ran for another decade, I still wouldn’t run out of things to do with it.”

Source: www.theguardian.com

Unveiling the Hidden World of a Porn Addict: ‘I Take Extreme Caution in Concealing My Actions’

Tony, who is in his 50s, recently did a quick calculation of how much time he’s spent watching porn in his life. “The results were horrifying,” he says. Eight years. “It’s hard to even think about. The frustration is intense.”

Tony saw his first “hardcore” movie on VHS in the 1980s, when he was 12 years old. It was in his 20s that he first got online, which turned his habit into a “full-blown addiction.” For the past 30 years, he’s managed to maintain a double life: he works in care, has friendships and relationships with men and women. But there’s one side of him he keeps completely secret.

“So far, I’ve only told three people about this: two therapists, and now you,” he says. “I’ve kept it a complete secret from everyone I’ve ever known. I’m very careful to cover my tracks, even in relationships. My lack of interest in sex with my partner might be the only thing that makes her wonder.”

Tony has tried many times to stop watching porn but has never managed more than a month without it. He’s tried cutting down, banning masturbation, blocking porn sites, and quitting completely. But “the addict’s brain is very cunning and manipulative,” he says. He also tried therapy, but found it hard to sustain the cost long-term.

Still, Tony is grateful for one thing: he was young before the internet. “At least I had a normal youth. Parties, shows, adventures with friends. I had a girlfriend. I had a sex life. A guy like me doesn’t have that chance now.”

Pornography use in the UK and globally has skyrocketed with the spread of mobile phones: in May 2023 alone, around 13.8 million people, a third of all internet-using adults, viewed pornography online, according to Ofcom. Of these, two-thirds were male. Although pornography companies do not report (or acknowledge) statistics on underage viewers, children in the UK first see pornography at age 12 on average. In a recent study, the Children’s Commissioner for England said that much of what young people see is violent and extreme.

… (content continues)

Source: www.theguardian.com

EU Identifies Three Porn Sites Subject to Stricter Online Content Regulations

Age verification technology could be heading to adult content sites after these three sites were added to the list of platforms subject to the most stringent level of regulation under the European Union’s Digital Services Act (DSA).

Back in April, the EU announced an initial list of 17 so-called Very Large Online Platforms (VLOPs) and two Very Large Online Search Engines (VLOSEs) designated under the DSA. The initial list did not include adult content sites. The addition of the three platforms specified today changes that.

According to Wikipedia (which, ironically, was itself named a VLOP in the first wave of Commission designations), XVideos and Pornhub are the world’s No. 1 and No. 2 most-visited adult content sites. Stripchat, meanwhile, is an adult webcam platform that live-streams nude performers.

None of the three services currently require visitors to undergo a robust age check (i.e., age verification rather than self-declaration) before accessing their content, but as a result of today’s designation, that is set to change.

As the EU points out, the pan-EU regulation imposes a number of additional obligations on designated (large) platforms with an average monthly user base of more than 45 million people in the region, including obligations to protect minors. In today’s press release, the Commission writes [emphasis ours]: “VLOPs must design their services, including interfaces, recommender systems, and terms of use, to address and prevent risks to the wellbeing of children. Mitigation measures must protect the rights of the child and prevent minors from accessing pornographic content online, such as with age verification tools.”

The European Commission, which is responsible for overseeing VLOPs’ compliance with the DSA, today reiterated that creating a safer online environment for children is an enforcement priority.

Other DSA obligations on VLOPs include producing a risk assessment report on the “specific systemic risks” their services may pose in relation to the dissemination of illegal content and content that threatens fundamental rights; the report must first be shared with the Commission and then published.

They must also apply mitigation measures to address the risks associated with the online dissemination of illegal content, such as child sexual abuse material (CSAM), and content affecting fundamental rights, such as human dignity and the right to private life, for example the non-consensual sharing of intimate content or deepfake pornography online.

“These measures may include, among other things, adaptations to terms of use, interfaces, moderation processes, algorithms, etc.,” the Commission notes.

The three adult platforms designated as VLOPs have four months to bring their services into compliance with the additional DSA requirements, meaning they have until late April to make the necessary changes, such as rolling out age verification technology.

“The European Commission’s services will closely monitor compliance with the DSA obligations by these platforms, in particular with regard to measures to protect minors from harmful content and to combat the spread of illegal content,” the EU said, adding that it will work closely with the newly designated platforms to ensure these issues are addressed appropriately.

The DSA also contains a set of more broadly applicable general obligations that apply not only to VLOPs but to smaller digital services too: for example, ensuring that systems are designed to deliver a high level of privacy, safety, and protection for children, and promptly notifying law enforcement authorities on becoming aware of information that gives rise to suspicion of a criminal offence involving a threat to a person’s life or safety, including in cases of child sexual abuse. The compliance deadline for these requirements starts slightly earlier, on February 17, 2024.

The DSA applies across the EU and EEA (European Economic Area), a region that, post-Brexit, no longer includes the UK. However, this autumn the UK passed its own Online Safety Act (OSA), establishing communications regulator Ofcom as the country’s internet content watchdog and introducing a regime of harsher penalties for breaches than the EU’s: OSA fines can amount to up to 10% of global annual turnover, versus up to 6% under the DSA.

UK law also focuses on child protection. Recent Ofcom guidance for porn sites, aimed at helping them comply with new legal obligations to prevent minors from encountering adult content online, states that “highly effective” age checks must be conducted, and further specifies that such checks cannot be age gates that simply ask users to self-declare that they are 18 or over.

Ofcom’s list of approved age verification technologies in the UK includes asking porn site users to upload a copy of their passport to verify their age; showing their face to a webcam to receive an AI age estimate; or signing in via open banking to prove they are not a minor. All of these are methods the regulator deems acceptable.

Source: techcrunch.com