Olivia Williams Advocates for ‘Nudity Rider’-Style Protections for AI Body Scanning in Acting

Amid rising concern about the impact of artificial intelligence on performers, the actress Olivia Williams has said that actors should treat data obtained from body scans in the same way they treat nude scenes.

The star of Dune: Prophecy and The Crown said she and fellow actors often face mandatory body scans by on-set cameras, with little assurance about how that data will be used or where it will end up.

“It would be reasonable to adhere to the ‘nudity rider’ standard,” she noted. “This footage should only be used within that specific scene; it must not be repurposed elsewhere. Furthermore, any edited scenes must be removed across all formats.”

Williams drew attention to a vague provision in contracts that seems to grant studios extensive rights to use images of performers “on every platform currently existing or created in the future worldwide, indefinitely.”

Widespread criticism of the AI-generated performer Tilly Norwood has reignited conversation about AI’s impact on actors, who fear their likenesses and poses will be used to train AI systems, potentially threatening their employment.

Actors, stunt performers, dancers, and supporting actors told the Guardian that they felt “ambushed” and compelled to take part in body scans on set. Many reported having little time to discuss how the resulting data would be handled or whether it could be used to train AI.

Williams recounted her unsuccessful attempts to have the ambiguous clause removed from her contract. She explored obtaining a limited license to control her body scan data, but her lawyer advised that the legal framework was too uncertain and the costs of trying to reclaim the data were prohibitively high.

“I’m not necessarily looking for financial compensation for the use of my likeness,” she remarked. “What concerns me is being depicted in places I’ve never been, engaging in activities I’ve never done, or expressing views I haven’t shared.”

“Laws are being enacted, and no one is intervening. They’re establishing a precedent and solidifying it. I sign these contracts because not doing so could cost me my career.”

Williams said she is advocating for younger actors who have little option but to undergo scans without clear information about what happens to their data. “I know a 17-year-old girl who was encouraged to undergo the scan and complied, similar to the scene from Chitty Chitty Bang Bang. Because she was a minor, a chaperone was required to consent on her behalf, but her chaperone was a grandmother unfamiliar with the legal implications.”

The matter is currently under discussion between Equity, the UK performing arts union, and Pact, the trade body of the UK film industry. “We are calling for AI safeguards to be integrated into major film and television contracts to prioritize consent and transparency for on-set scanning,” said Equity general secretary Paul W Fleming.

“It is achievable for the industry to implement essential minimum standards that could significantly transform conditions for performers and artists in British TV and film.”

Pact issued a statement saying: “Producers are fully aware of their responsibilities under data protection legislation, and these concerns are being addressed during collective negotiations with Equity. Due to the ongoing talks, we are unable to provide further details.”

Source: www.theguardian.com

Commissioner Advocates for Ban on Apps Creating Deepfake Nude Images of Children

“Nudification” apps that use artificial intelligence to generate sexually explicit images of children are raising alarm, with the children’s commissioner for England calling for them to be banned amid rising fears for potential victims.

Girls have reported refraining from sharing images of themselves on social media due to fears that generative AI tools could alter or sexualize their clothing. Although creating or disseminating sexually explicit images of children is illegal, the underlying technology remains legal, according to the report.

“Children express fear at the mere existence of this technology. They worry strangers, classmates, or even friends might exploit smartphones to manipulate them, using these specialized apps to create nude images,” a spokesperson stated.

“While the online landscape is innovative and continuously evolving, there’s no justifiable reason for these specific applications to exist. They have no rightful place in our society, and tools that enable the creation of naked images of children using deepfake technology should be illegal.”

De Souza has proposed an AI bill requiring developers of generative AI tools to address the risks their products pose to children, and has urged the government to implement an effective system for removing sexually explicit deepfake images of children. This should be supported by policy measures recognizing deepfake sexual abuse as a form of violence against women and girls.

Meanwhile, the report calls on Ofcom to ensure robust age verification for nudification apps, and for social media platforms to restrict children’s access to sexually explicit deepfake tools, in accordance with online safety laws.

The findings revealed that 26% of respondents aged 13 to 18 had encountered sexually explicit deepfake images of celebrities, friends, teachers, or themselves.

Many AI tools reportedly focus solely on female bodies, thereby contributing to an escalating culture of misogyny, the report cautions.

An 18-year-old girl conveyed to the commissioner:

The report highlighted cases like that of Mia Janin, who tragically died by suicide in March 2021, illustrating connections between deepfake abuse, suicidal thoughts, and PTSD.

In her report, De Souza stated that new technologies confront children with concepts they struggle to comprehend, evolving at a pace that overwhelms their ability to recognize the associated hazards.

Lawyers told the Guardian that young people arrested for sexual offenses often do not understand the repercussions of their actions, particularly when experimenting with deepfakes.

Danielle Reece-Greenhalgh, a partner at the law firm Corker Binning, noted that the existing legal framework poses significant challenges for law enforcement agencies seeking to identify and protect abuse victims.

She said that banning such apps might ignite debates over internet freedom and could disproportionately impact young men experimenting with AI software without understanding the consequences.

Reece-Greenhalgh remarked that while the criminal justice system strives to treat adolescent offenses with understanding, previous efforts to mitigate criminality among youth have faced challenges when offenses occur in private settings, leading to unintended consequences within schools and communities.

Matt Hardcastle, a partner at Kingsley Napley, emphasized the “online youth minefield” surrounding access to illegal sexual and violent content, noting that many parents are unaware of how easily their children can encounter situations that lead to harmful experiences.

“Parents often view these situations from their children’s perspectives, unaware that their actions can be both illegal and detrimental to themselves or others,” he stated. “Children’s brains are still developing, leading them to approach risk-taking very differently.”

Marcus Johnston, a criminal lawyer focusing on sex crimes, reported working with an increasingly youthful demographic involved in such crimes, often without parental awareness of the issues at play. “Typically, these offenders are young men, seldom young women, ensnared indoors, while parents mistakenly perceive their activities as mere games,” he explained. “These offenses have emerged largely due to the internet, with most sexual crimes now taking place online, spearheaded by forums designed to cultivate criminal behavior in children.”

A government spokesperson stated:

“Creating, possessing, or distributing child sexual abuse material, including AI-generated images, is abhorrent and illegal. Platforms of all sizes must remove this content or face significant fines under online safety laws. The UK is pioneering AI-specific child sexual abuse offenses, making it illegal to own, create, or distribute tools designed to generate this abhorrent material.”

  • In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child can call 0808 800 5000. The National Association for People Abused in Childhood (NAPAC) supports adult survivors on 0808 801 0331. In Australia, children, young adults, parents, and educators can contact Kids Helpline on 1800 55 1800, or Bravehearts on 1800 272 831. Adult survivors can contact the Blue Knot Foundation on 1300 657 380.

Source: www.theguardian.com

Pedophiles Urged to Use AI to Generate Nude Images of Children for Extortion, Charity Warns

An organization dedicated to fighting child abuse has reported that pedophiles are being encouraged to utilize artificial intelligence to generate nude images of children and coerce them into producing more explicit content.

The Internet Watch Foundation (IWF) stated that a manual discovered on the dark web included a section advising criminals to use a “denuding” tool to strip clothing from photos sent by children. These photos could then be used for blackmail purposes to obtain further graphic material.

The IWF expressed concern over the fact that perpetrators are now discussing and promoting the use of AI technologies for these malicious purposes.


The charity, known for identifying and removing child sexual abuse content online, initiated an investigation into cases of sextortion last year. They observed a rise in incidents where victims were coerced into sharing explicit images under threat of exposure. Additionally, the use of AI to create highly realistic abusive content was noted.

The author of the online manual, who remains anonymous, claimed to have successfully coerced 13-year-old girls into sharing nude images online. The IWF reported the document to the UK National Crime Agency.

The Guardian recently reported that the Labour party was discussing a ban on tools that create nude imagery.

According to the IWF, 2023 saw record levels of the most extreme child sexual abuse material. More than 275,000 web pages containing such material, including content depicting rape, sadism, and bestiality, were identified, the highest number on record. This included a significant amount of Category A content, the most severe classification, covering the most explicit and harmful images.

The IWF also discovered 2,401 images of self-generated child sexual abuse material involving children aged three to six, in which victims were manipulated or threatened into recording their own abuse, typically in domestic settings such as bedrooms and kitchens.

Susie Hargreaves, the CEO of IWF, emphasized the urgent need to educate children on recognizing danger and safeguarding themselves against manipulative criminals. She stressed the importance of the recently passed Online Safety Act to protect children on social media platforms.

Security Minister Tom Tugendhat advised parents to engage in conversations with their children about safe internet usage. He emphasized the responsibility of tech companies to implement stronger safeguards against abuse.

Research published by Ofcom revealed that a significant percentage of young children own mobile phones and engage in social media. The government is considering measures such as raising the minimum age for social media use and restricting smartphone sales to minors.

Source: www.theguardian.com

Family Brings Battle Against Deepfake Nude Images to Washington

Francesca Mani returned home from school in suburban New Jersey last October and shared shocking news with her mother, Dorota.

At Westfield High School, the 14-year-old and her friends had been targeted with abuse through the distribution of fake nude images created using artificial intelligence.

Dorota, aware of the power of this technology, was surprised by how easily the images were generated.

She expressed her disbelief, stating, “With just a single image, I didn’t anticipate how quickly this could happen. It’s a risk for anyone at the simple click of a button.”

An investigation by The Guardian’s Black Box podcast series revealed the origins and operators of an app called ClothOff, which was used to create the explicit images at Westfield High School.

Francesca and Dorota decided to take action after feeling dissatisfied with the school board’s response to the incident. They began advocating for new legislation at both the state and federal levels to hold creators of non-consensual, sexually explicit deepfakes accountable.

The growing number of cases like the one at Westfield High School has highlighted the gaps in existing laws and the urgent need for stronger protections, especially for minors.

The National Center for Missing & Exploited Children (NCMEC) is collaborating with the Mani family to investigate the further spread of the images generated at the school.

While the school district initiated an investigation and offered counseling to affected students, the lack of criminal repercussions for the perpetrators due to current laws is a major concern for the victims’ families.

ClothOff denied involvement in the incident and suggested that a competing app may have been responsible.

Francesca and Dorota’s efforts have led to the introduction of bills in Congress to criminalize the sharing of AI-generated images without consent and provide victims with legal recourse.

Despite bipartisan support for these bills, progress has been slow due to other pressing issues in government, but efforts to address the misuse of AI technology continue at both the state and federal levels.

A bipartisan push to create deterrents against the creation and dissemination of deepfakes is gaining momentum as more states consider legislation to address the issue.

Incidents similar to the one at Westfield High School have occurred across the country, highlighting the urgent need for comprehensive laws to combat the misuse of AI technology.

Francesca and Dorota, along with other affected families, are committed to ensuring accountability for those responsible for creating and distributing deepfake images.

Their advocacy has drawn attention to the need for stronger legal protections against AI-generated deepfakes, emphasizing the importance of preventing further harm to vulnerable individuals.

Source: www.theguardian.com