Chilling Effect: How Fear of ‘Nudify’ Apps and AI Deepfakes Is Driving Indian Women Away from the Internet

Gaatha Sarvaiya enjoys sharing her artistic work on social media. A law graduate from India in her early 20s, she is at the start of her career and trying to build a public profile. But the rise of AI-driven deepfakes has made her wary: she can no longer be sure that the images she shares won’t be twisted into something inappropriate or unsettling.

“I immediately thought, ‘Okay, maybe this isn’t safe. People could take our pictures and manipulate them,’” says Sarvaiya, who lives in Mumbai.

“There is certainly a chilling effect,” says Rohini Lakshane, a gender rights and digital policy researcher based in Mysore. She, too, refrains from posting photos of herself online. “Given how easily it can be misused, I remain particularly cautious.”

In recent years, India has emerged as a crucial testing ground for AI technologies, becoming the second-largest market for OpenAI with the technology being widely embraced across various professions.

However, a recently released report, based on data compiled by the Rati Foundation, which operates a national helpline for victims of online abuse, finds that the growing use of AI is opening formidable new avenues for the harassment of women.

“Over the past three years, we have found that a significant majority of AI-generated content is used to target women and sexual minorities,” states the report, prepared by Tattle, an organization working to curb misinformation on social media in India.

The report highlights the growing use of AI tools to digitally alter images and videos of women, producing fake nudes as well as culturally sensitive content, such as depictions of public affection that, while unremarkable in many Western contexts, can draw censure in more conservative Indian communities.




Indian singer Asha Bhosle (left) and journalist Rana Ayyub are victims of deepfake manipulations on social media. Photo: Getty

The findings indicate that approximately 10% of the cases documented by the helpline involve such altered images. “AI significantly simplifies the creation of realistic-looking content,” the report notes.

High-profile women have been among the targets. Bollywood singer Asha Bhosle’s image and voice were replicated using AI and distributed on YouTube. Journalist Rana Ayyub faced a campaign targeting her personal information last year, with deepfake sexual images of her circulating on social media.

These cases sparked widespread public debate, and some figures, such as Bhosle, have successfully asserted legal rights over their voice and likeness. Far less discussed, however, is the effect on everyday women like Sarvaiya, who are increasingly afraid to engage online.

“When individuals encounter online harassment, they often self-censor or become less active online as a direct consequence,” explains Tarunima Prabhakar, co-founder of Tattle. Her organization conducted focus group research for two years across India to gauge the societal impacts of digital abuse.

“The predominant emotion we identified is one of fatigue,” she remarks. “This fatigue often leads them to withdraw entirely from online platforms.”

In recent years, Sarvaiya and her peers have followed high-profile deepfake abuse cases, including those of Ayyub and Bollywood actress Rashmika Mandanna. “It’s a bit frightening for women here,” she admits.

These days, Sarvaiya is reluctant to post anything on social media and has set her Instagram account to private. Even that, she fears, may not be enough to protect her: women are sometimes photographed without their knowledge in public places, such as on the metro, and the images surface online later.

“It’s not as common as some might think, but luck can be unpredictable,” she observes. “A friend of a friend is actually facing threats online.”

Lakshane says she often asks not to be photographed at events where she speaks. Despite her precautions, she is mentally preparing for the possibility that a deepfake image or video of her could one day emerge. On the apps she uses, her profile image is an illustration of herself rather than a photo.

“Women with a public platform, an online presence, and those who express political opinions face a significant risk of image misuse,” she highlights.


Rati’s report details how AI applications such as “nudify” apps, which are designed to strip clothing from images, have normalized behaviors once considered extreme. In one reported case, a woman contacted the helpline after a photo she had submitted for a loan application was used to extort her.

“When she stopped making payments, the photo she had uploaded was altered with a nudify app and superimposed onto a pornographic image,” the report says.

The altered image, along with her phone number, was circulated on WhatsApp, triggering a flood of sexually explicit calls and messages from strangers. The woman told the helpline she felt “humiliated and socially stigmatized”, as though she had “become involved in something sordid”.




A fake video purporting to show Indian National Congress leader Rahul Gandhi and Finance Minister Nirmala Sitharaman promoting a financial scheme. Photo: DAU Secretariat

In India, as in much of the world, deepfakes occupy a legal gray area. Although no statute explicitly prohibits them, Rati’s report highlights existing Indian laws against online harassment and intimidation under which women can report AI deepfakes.

“However, the process is often lengthy,” says Sarvaiya, who argues that India’s legal framework is not adequately prepared to deal with AI deepfakes. “There is a significant amount of bureaucracy involved in seeking justice for what has occurred.”

A significant part of the problem lies with the platforms on which such images are disseminated, including YouTube, X, and Meta’s Instagram and WhatsApp. Indian law enforcement agencies describe the process of compelling these companies to remove abusive content as “often opaque, resource-draining, inconsistent, and ineffective,” according to a report published by Equality Now, an organization advocating for women’s rights.

Apple and Meta have recently taken some action against nudify apps. Even so, Rati’s report uncovers multiple instances in which platforms responded inadequately to online abuse, allowing such imagery to spread further.

WhatsApp did respond in the extortion case, but its action was deemed “insufficient” because the altered images had already proliferated across the internet, Rati said. In another instance, an Instagram creator in India was targeted by a troll who shared nude clips of her, yet Instagram reacted only after “persistent efforts”, with a response that was “delayed and inadequate”.


The report indicates that victims reporting harassment on these platforms often go unheard, prompting them to reach out to helplines. Furthermore, even when accounts disseminating abusive material are removed, such content tends to resurface, a phenomenon Rati describes as “content recidivism.”

“One persistent characteristic of AI abuse is its tendency to proliferate: it is easily produced, broadly shared, and repeated multiple times,” Rati states. Confronting this issue “will necessitate much greater transparency and data accessibility from the platforms themselves.”

Source: www.theguardian.com
