It started with an anonymous email. “I'm sorry to have to contact you,” it read. Below that were three links to internet forums. “HUGE trigger warning … they contain vile photoshopped images of you.”
Jodie (not her real name) froze. The 27-year-old from Cambridgeshire had had problems in the past with her photos being stolen to set up fake dating profiles and social media accounts. She called the police, but was told there was nothing they could do, and pushed it to the back of her mind.
But this email, which arrived on March 10, 2021, was impossible to ignore. She clicked on the links. “It was like time stood still,” she said. “I remember screaming so loud. I just completely broke down.”
The forum, an alternative porn website, contained hundreds of photos of her, alone, on holiday and with friends and housemates, alongside captions labeling her a “slut”. In the comments, users called her a “slut” and a “prostitute”, asked others to rate her, and discussed the fantasies they had about her.
The person who posted the photos also issued an invitation to other members of the forum: to use artificial intelligence to create sexually explicit “deepfakes”, digitally altered content, from fully clothed photos of Jodie taken from her private Instagram.
“I've never done anything like this before but would love to see her faked … happy to chat and show more of her too … :D,” they wrote. In response, users posted hundreds of images and videos combining other women's bodies with Jodie's face. One showed her dressed as a schoolgirl being raped by a teacher in a classroom. Others showed her fully “nude”, “having sex in every room”, she said. “The shock and devastation still haunts me.”
The fake images, since deleted, are part of a growing wave of synthetic, sexually explicit photos and videos being created, traded and sold on social media apps, in private messages and on gaming platforms in the UK and around the world, as well as on adult forums and porn sites.
Last week, the government announced a “crackdown” on explicit deepfakes, promising to expand the current law, which since January 2024 has made it a criminal offense to share such images without consent, so that creating them is also illegal. Asking someone else to make them for you is not expected to be covered. The government has also not yet confirmed whether the offense will be based on consent, as campaigners say it must be, or whether victims will have to prove that the perpetrator had malicious intent.
At the Revenge Porn Helpline's headquarters in a business park on the outskirts of Exeter, senior practitioner Kate Worthington, 28, says stronger laws with no loopholes are desperately needed.
Launched in 2015, the helpline is a dedicated service for victims of intimate image abuse, part-funded by the Home Office. Deepfake cases are at an all-time high, with reports of synthetic image abuse up 400% since 2017. The numbers remain small compared with intimate image abuse overall: there were 50 cases last year, about 1% of the helpline's total caseload. The main reason, Worthington says, is that it is vastly underreported. “Victims often don't know their images are being shared.”
Researchers have found that many perpetrators of deepfake image abuse appear to be motivated by “collector culture”. “A lot of the time it's not done with the intention of the person ever knowing,” Worthington said. Images are “bought, sold, swapped, traded, for sexual gratification or for status. If you're the one finding this content and sharing it alongside someone's Snap handle, Insta handle or LinkedIn profile, you might receive glory.” Many are created using “nudify” apps. In March, the charity that runs the Revenge Porn Helpline reported 29 such services to Apple, which removed them.
There have also been cases where synthetic images have been used to directly threaten or humiliate people. The helpline has heard of boys creating fake incestuous images of female relatives; of a man with a porn addiction creating synthetic images of his partner in sexual scenarios she had not consented to in real life; and of people photographed at the gym who then had deepfake videos made to make it look like they were having sex. Most, but not all, of those targeted are women: approximately 72% of the deepfake cases identified by the helpline involved women. The oldest was in their 70s.
There have also been cases where Muslim women have been targeted with deepfake images of themselves wearing revealing clothing or without their hijabs.
Regardless of intent, the impact is often extreme. “Many of these photos are so realistic that your coworkers, neighbors, and grandma won't be able to tell the difference,” says Worthington.
The Revenge Porn Helpline helps people get abusive images removed. Amanda Dashwood, 30, who has worked at the helpline for two years, says this is usually a caller's priority. “It's ‘Oh my God, help me, I need this taken down before people see it,’” she says.
She and her colleagues on the helpline team, eight women, most of them under 30, have a variety of tools at their disposal. If the victim knows where the content has been posted, the team will issue a takedown request directly to the platform. Some platforms ignore these requests completely. But the helpline has partnerships with most of the major platforms, from Instagram and Snapchat to Pornhub and OnlyFans, and has a successful removal rate of 90%.
If the victim doesn't know where the content has been posted, or suspects it is being shared more widely, they can send a selfie to be run, with their consent, through facial recognition technology or a reverse image search tool. These tools are not foolproof, but they can detect material being shared on the open web.
The team can also advise on steps to stop content from being posted online again, directing people to a service called StopNCII. The tool was created by the online safety charity SWGfL, which also runs the Revenge Porn Helpline, with funding from Meta.
Users can upload real or synthetic photos, and the technology creates a unique hash and shares it with partner platforms such as Facebook, Instagram, TikTok, Snapchat, Pornhub, and Reddit (but not X or Discord). If someone tries to upload that image, it will be automatically blocked. As of December, 1 million images had been hashed and 24,000 uploads were proactively blocked.
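To make the hash-matching idea concrete, here is a minimal, illustrative sketch of how perceptual hashing can let a platform block a known image without ever storing the image itself. This is not StopNCII's actual code or hashing scheme (the service generates hashes on the user's own device and shares only those fingerprints with partner platforms); it simply demonstrates the general technique using the open-source Pillow and imagehash libraries, and the file names, function names and matching threshold are assumptions made for this example.

```python
# Illustrative sketch only: not StopNCII's real implementation.
# Uses the open-source Pillow and imagehash libraries; the threshold
# and function names are assumptions made for this example.

from PIL import Image
import imagehash

# Fingerprints of images a user has asked to protect. Only hashes are
# stored here, never the photos themselves.
protected_hashes: list[imagehash.ImageHash] = []

def register_image(path: str) -> None:
    """Compute a perceptual hash of a private photo and keep only the hash."""
    protected_hashes.append(imagehash.phash(Image.open(path)))

def should_block_upload(path: str, max_distance: int = 8) -> bool:
    """Return True if an uploaded file looks like any protected image.

    Perceptual hashes of near-identical images differ by only a few bits,
    so a small Hamming distance still matches resized or re-compressed copies.
    """
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - known <= max_distance for known in protected_hashes)

# Example usage (file names are placeholders):
# register_image("private_photo.jpg")
# if should_block_upload("incoming_upload.jpg"):
#     print("Upload blocked: matches a protected image hash")
```

In practice, partner platforms check new uploads against the shared hash list at upload time, which is what allows the proactive blocking described above.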
Some people do go to the police, but responses vary widely from force to force. Victims who have tried to report synthetic image abuse have been told that police cannot act on edited images, or that prosecution would not be in the public interest.
Helpline manager Sophie Mortimer recalls another case in which police said, “No, that's not you. It's someone who looks like you,” and refused to investigate. “I feel like police sometimes look for reasons not to pursue these types of cases,” Mortimer said. “We know it's difficult, but that doesn't negate the real harm that's being caused to people.”
In November, Sam Miller, assistant chief constable and director of the violence against women and girls strategy at the National Police Chiefs' Council, told a parliamentary inquiry into intimate image abuse that she was worried police lacked a “deep understanding” of offending behavior, and that there were inconsistencies in how the law is applied. “Yesterday, one victim told me that of the 450 victims of deepfake image abuse she has spoken to, only two have had a positive experience with the police,” she said.
For Jodie, it is clear that awareness of deepfake abuse needs to be raised not only among law enforcement but among the general public.
After being alerted to the deepfakes, she spent hours scrolling through the posts, trying to piece together what had happened.
She realized that the images had been shared not by a stranger but by a close friend: Alex Woolf, a Cambridge University graduate and former BBC Young Composer of the Year. He had posted a photo of her from which he had been cropped out. “I knew I hadn't posted that photo on Instagram and had only sent it to him. That's when the penny dropped.”