{"id":272852,"date":"2025-11-05T10:53:08","date_gmt":"2025-11-05T10:53:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/272852\/"},"modified":"2025-11-05T10:53:08","modified_gmt":"2025-11-05T10:53:08","slug":"the-chilling-effect-how-fear-of-nudify-apps-and-ai-deepfakes-is-keeping-indian-women-off-the-internet-global-development","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/272852\/","title":{"rendered":"\u2018The chilling effect\u2019: how fear of \u2018nudify\u2019 apps and AI deepfakes is keeping Indian women off the internet | Global development"},"content":{"rendered":"<p class=\"dcr-130mj7b\">Gaatha Sarvaiya would like to post on social media and share her work online. An Indian law graduate in her early 20s, she is in the earliest stages of her career and trying to build a public profile. The problem is, with AI-powered deepfakes on the rise, there is no longer any guarantee that the images she posts will not be distorted into something violating or grotesque.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe thought immediately pops in that, \u2018OK, maybe it\u2019s not safe. Maybe people can take our pictures and just do stuff with them,\u2019\u201d says Sarvaiya, who lives in Mumbai.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe chilling effect is true,\u201d says Rohini Lakshan\u00e9, a researcher on gender rights and digital policy based in Mysuru who also avoids posting photos of herself online. \u201cThe fact that they can be so easily misused makes me extra cautious.\u201d<\/p>\n<p>\u201cThe consequence of facing online harassment is silencing yourself or becoming less active online\u201d \u2013 Tarunima Prabhakar, Tattle<\/p>\n<p class=\"dcr-130mj7b\">In recent years, India has become one of the most important testing grounds for AI tools. 
It is the <a href=\"https:\/\/techcrunch.com\/2025\/10\/27\/openai-offers-free-chatgpt-go-for-one-year-to-all-users-in-india\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">world\u2019s second-largest market for OpenAI<\/a>, with the technology <a href=\"https:\/\/www.economist.com\/asia\/2025\/09\/17\/ai-is-erupting-in-india\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">widely adopted across professions<\/a>.<\/p>\n<p class=\"dcr-130mj7b\">But a <a href=\"https:\/\/tattle.co.in\/blog\/make-it-real\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">report released on Monday<\/a> that draws on data collected by the Rati Foundation, a charity running a countrywide helpline for victims of online abuse, shows that the rising adoption of AI has created a powerful new way to harass women.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIt has become evident in the last three years that a vast majority of AI-generated content is used to target women and gender minorities,\u201d says the report, authored by the Rati Foundation and <a href=\"https:\/\/tattle.co.in\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Tattle<\/a>, a company that works to reduce misinformation on India\u2019s social media.<\/p>\n<p class=\"dcr-130mj7b\">In particular, the report found an increase in AI tools being used to create digitally manipulated images or videos of women \u2013 either nudes or images that might be culturally appropriate in the US, but are stigmatising in many Indian communities, such as public displays of affection.<\/p>\n<p>The Indian singer Asha Bhosle, left, and journalist Rana Ayyub, who have been affected by deepfake manipulation on social media. Photograph: Getty<\/p>\n<p class=\"dcr-130mj7b\">About 10% of the hundreds of cases reported to the helpline now involve these images, the report found. 
\u201cAI makes the creation of realistic-looking content much easier,\u201d it says.<\/p>\n<p class=\"dcr-130mj7b\">There have been high-profile cases of Indian women in the public sphere having their images manipulated by AI tools: for example, the <a href=\"https:\/\/www.indialaw.in\/blog\/intellectual-property-rights\/asha-bhosle-wins-case-against-ai-voice-and-image-misuse\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Bollywood singer Asha Bhosle<\/a>, whose likeness and voice were cloned using AI and circulated on YouTube. Rana Ayyub, a journalist known for investigating political and police corruption, became the target of a <a href=\"https:\/\/www.theguardian.com\/technology\/2022\/feb\/08\/facebook-should-guard-against-revealing-private-addresses-board-recommends\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">doxing<\/a> campaign last year that led to <a href=\"https:\/\/rsf.org\/en\/rana-ayyub-face-india-s-women-journalists-plagued-cyber-harassment\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">deepfake sexualised images<\/a> of her appearing on social media.<\/p>\n<p class=\"dcr-130mj7b\">These have led to a society-wide conversation, in which some figures, such as Bhosle, have <a href=\"https:\/\/blogs.lse.ac.uk\/medialse\/2025\/10\/14\/from-deepfakes-to-dignity-what-bollywoods-personality-rights-battle-with-ai-tells-us\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">successfully fought for legal rights over their voice or image<\/a>. Less discussed, however, is the effect that such cases have on ordinary women who, like Sarvaiya, feel increasingly uncertain about going online.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe consequence of facing online harassment is actually silencing yourself or becoming less active online,\u201d says Tarunima Prabhakar, co-founder of Tattle. 
Her organisation ran focus groups across India for two years to understand how digital abuse affects society.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe emotion that we have identified is fatigue,\u201d she says. \u201cAnd the consequence of that fatigue is also that you just completely recede from these online spaces.\u201d<\/p>\n<p class=\"dcr-130mj7b\">For the past few years, Sarvaiya and her friends have followed high-profile cases of deepfake online abuse, such as Ayyub\u2019s, or that of the <a href=\"https:\/\/www.theguardian.com\/film\/bollywood\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">Bollywood<\/a> actor Rashmika Mandanna. \u201cIt\u2019s a little scary for women here,\u201d she says.<\/p>\n<p class=\"dcr-130mj7b\">Now, Sarvaiya hesitates to post anything on social media and has made her Instagram private. Even this, she worries, will not be enough to protect her: women are sometimes photographed in public spaces such as the metro, and those <a href=\"https:\/\/www.youtube.com\/watch?v=mI6gQkUy6NQ\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">pictures can later appear online<\/a>.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIt\u2019s not as common as you would think it is, but you don\u2019t know your luck, right?\u201d she says. \u201cFriends of friends are getting blackmailed \u2013 literally, off the internet.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Lakshan\u00e9 says she often asks not to be photographed at events now, even those where she is a speaker. But despite taking precautions, she is prepared for the possibility that a deepfake video or image of her might surface one day. 
On apps, she has made her profile picture an illustration of herself rather than a photo.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThere is fear of misuse of images, especially for women who have a public presence, who have a voice online, who take political stands,\u201d she says.<\/p>\n<p class=\"dcr-130mj7b\">Rati\u2019s report outlines how AI tools, such as <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/apr\/28\/what-are-nudification-apps-how-would-uk-ban-work\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">\u201cnudification\u201d or nudify apps<\/a> \u2013 which can remove clothes from images \u2013 have made cases of abuse once seen as extreme far more common. In one instance it described, a woman approached the helpline after a photo she submitted with a loan application was used to extort money from her.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWhen she refused to continue with the payments, her uploaded photograph was digitally altered using a nudify app and placed on a pornographic image,\u201d the report says.<\/p>\n<p class=\"dcr-130mj7b\">That photograph, with her phone number attached, was circulated on WhatsApp, resulting in a \u201cbarrage of sexually explicit calls and messages from unknown individuals\u201d. The woman told Rati\u2019s helpline that she felt \u201cshamed and socially marked, as though she had been \u2018involved in something dirty\u2019\u201d.<\/p>\n<p>A fake video ostensibly showing Rahul Gandhi, the Indian National Congress leader, and India\u2019s finance minister, Nirmala Sitharaman, promoting a financial scheme. 
Photograph: DAU Secretariat<\/p>\n<p class=\"dcr-130mj7b\">In India, as in most of the world, deepfakes operate in a legal grey zone \u2013 <a href=\"https:\/\/www.impriindia.com\/insights\/deepfakes-violence-laws-digital\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">no specific laws recognise them<\/a> as distinct forms of harm, although Rati\u2019s report outlines several Indian laws that could apply to online harassment and intimidation, under which women can report AI deepfakes.<\/p>\n<p class=\"dcr-130mj7b\">\u201cBut that process is very long,\u201d says Sarvaiya, who has argued that India\u2019s legal system remains ill equipped to deal with AI deepfakes. \u201cAnd it has a lot of red tape to just get to that point to get justice for what has been done.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Part of the responsibility lies with the platforms on which these images are shared \u2013 often YouTube, Meta, X, Instagram and WhatsApp. Indian law enforcement agencies describe the process of getting these companies to remove abusive content as \u201copaque, resource-intensive, inconsistent and often ineffective\u201d, according to a <a href=\"https:\/\/equalitynow.org\/news\/press-releases\/research-exposes-how-women-in-india-are-being-abused-shamed-and-silenced-online\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">report released on Tuesday by Equality Now<\/a>, which campaigns for women\u2019s rights.<\/p>\n<p class=\"dcr-130mj7b\">While <a href=\"https:\/\/www.bbc.co.uk\/news\/videos\/cx205lnplpwo\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Apple<\/a> and <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/cgr58dlnne5o\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Meta have recently taken steps<\/a> to limit the spread of nudify apps, Rati\u2019s report notes several instances in which these platforms responded inadequately to online 
abuse.<\/p>\n<p class=\"dcr-130mj7b\">WhatsApp eventually took action in the extortion case but its response was \u201cinsufficient\u201d, Rati reported, as the nudes were already all over the internet. In another case, where an Indian <a href=\"https:\/\/www.theguardian.com\/technology\/instagram\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">Instagram<\/a> creator was harassed by a troll posting nude videos of them, Instagram only responded after \u201csustained effort\u201d, with a response that was \u201cdelayed and inadequate\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Victims were often ignored when they reported harassment to these platforms, the report says, which led them to approach the helpline. Furthermore, even if a platform removed an account spreading abusive content, that content often reappeared elsewhere, in what Rati calls \u201ccontent recidivism\u201d.<\/p>\n<p class=\"dcr-130mj7b\">\u201cOne of the abiding characteristics of AI-generated abuse is its tendency to multiply. It is created easily, shared widely and tends to resurface repeatedly,\u201d Rati says. Addressing it \u201cwill require far greater transparency and data access from platforms themselves\u201d.<\/p>\n","protected":false},"excerpt":{"rendered":"Gaatha Sarvaiya would like to post on social media and share her work online. 
An Indian law graduate&hellip;\n","protected":false},"author":2,"featured_media":272853,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-272852","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/272852","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=272852"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/272852\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/272853"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=272852"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=272852"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=272852"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}