Survey Reveals 1 in 4 People Unconcerned About Non-Consensual Sexual Deepfakes

A study commissioned by law enforcement found that 25% of respondents either believe there is nothing wrong with creating and sharing sexual deepfakes, or feel indifferent about the practice, regardless of whether the person depicted has consented.

In response to the findings, a senior law enforcement official warned that AI is exacerbating the epidemic of violence against women and girls (VAWG), and accused tech companies of being complicit in the abuse.

The survey of 1,700 participants, commissioned by the Office of the Chief Scientific Adviser, found that 13% were comfortable with sexual or intimate deepfakes (images or video manipulated using AI to depict a real person) being created and shared without the subject’s consent.

Additionally, 12% of respondents felt neutral about the moral and legal acceptability of creating and sharing such deepfakes.

Det Ch Supt Claire Hammond, of the National Centre for VAWG and Public Protection, emphasized that “distributing intimate images of someone without their consent, regardless of whether they are authentic, is a serious crime.”

Discussing the survey results, she said: “The rise of AI technology is accelerating violence against women and girls globally. Tech companies bear responsibility for enabling this abuse by making it easy to create and disseminate harmful material. Immediate action is required.”

She encouraged anyone affected by deepfakes to report them to authorities. Ms. Hammond stated: “This is a serious crime, and we are here to support you. Nobody should endure pain or shame in silence.”

Under new data laws, the creation of sexually explicit deepfakes without consent will be classified as a criminal offense.

A report from the crime and justice consultancy Crest Advisory indicated that 7% of participants had been depicted in a sexual or intimate deepfake. Of those, only 51% reported the incident to law enforcement. Those who stayed silent most commonly cited embarrassment and doubts that the crime would be taken seriously.

The data also indicated that men under 45 were more likely to be involved in creating and sharing deepfakes. This demographic also tended to consume pornographic content, hold misogynistic views, and view AI favorably. However, the report noted that the correlations between age, gender, and such beliefs are weak, and called for further research into the connection.

One in 20 respondents admitted to having created a deepfake, while more than 10% said they would be willing to do so in the future. Moreover, two-thirds reported that they had seen, or thought they might have seen, a deepfake.

Karian Desroches, the report’s author and head of policy and strategy at Crest Advisory, cautioned that the creation of deepfakes is “growing increasingly common as technology becomes more affordable and accessible.”

“While some deepfake content might seem innocuous, the majority is of a sexual nature and predominantly directed at women.”

“We are profoundly alarmed by our findings: a demographic of young individuals who actively consume pornography, exhibit misogynistic attitudes, and perceive no harm in creating or sharing sexual deepfakes of others without consent.”

“We are living in troubling times, and without immediate and concerted action in the digital arena, we jeopardize the futures of our daughters (and sons),” said Cally Jane Beech, an advocate for stronger protections for victims of deepfake abuse.

She added: “We are witnessing a generation of children growing up without protections, laws, or regulations addressing this issue, and the consequences of that unregulated freedom are dire.

“Confronting this issue starts at home. If we are to have any hope of eliminating it, we must prioritize education and foster open discussions every day.”

Source: www.theguardian.com