A disturbing and dangerous trend has surfaced on X, with users misusing the platform’s AI tool, Grok, to morph photographs of women and children into sexually compromising images. The development has triggered global outrage and renewed concerns over AI-enabled sexual abuse.

The trend began a couple of days ago and escalated on New Year’s Eve, spreading rapidly across the platform. Users were seen issuing direct prompts to Grok to digitally manipulate images of women and children, turning ordinary photographs into explicit and abusive content. These images were then circulated widely without consent, exposing victims to humiliation, harassment, and harm.

Women’s rights activists and users across countries have been mounting intense pressure on Elon Musk to immediately fix the feature that enables such abuse. While X has reportedly hidden Grok’s media feature, the misuse has not stopped: images can still be morphed, shared, and accessed on the platform.

The trend has now reached Indian users on X, with experts warning that the issue goes far beyond online mischief or trolling. Cyber-safety specialists and gender-rights advocates say that morphing images with AI amounts to a form of sexual violence, particularly when it targets women and children. They argue that such acts violate dignity, bodily autonomy, and consent, and can cause severe psychological trauma to victims whose images are weaponised without their knowledge.

The continued availability of morphed images on X, despite partial restrictions, has intensified criticism that the platform is failing to adequately protect users. Many worried women have begun deleting their pictures from the platform.

Cyber-security expert Ritesh Bhatia told CNBC-TV18, “Why are we asking or expecting victims to be careful at all? This isn’t about caution; it’s about accountability. When a platform like Grok even allows such prompts to be executed, the responsibility squarely lies with the intermediary. Technology is not neutral when it follows harmful commands. If a system can be instructed to violate dignity, the failure is not human behaviour alone — it is design, governance, and ethical neglect. Creators of Grok need to take immediate action.”

Discussing legal remedies, cyber-law expert Adv. Prashant Mali told CNBC-TV18, “I feel this is not mischief — it is AI-enabled sexual violence. Victims have clear remedies under the IT Act, 2000, especially Sections 66E (violation of privacy) and 67/67A (publishing or transmitting obscene or sexually explicit content), which squarely cover AI-generated morphed images even if no physical act occurred.

“Under the Bharatiya Nyaya Sanhita, 2023, Section 77 (voyeurism) and allied provisions on sexual harassment and the dignity of women criminalise creation and circulation of such material, recognising harm to autonomy, not just physical exposure. Where the victim is a minor, POCSO is triggered immediately, with Sections 11, 12, 13, and 14 treating AI-generated sexualised images as aggravated sexual exploitation, regardless of ‘virtual’ excuses, making punishment swift and non-negotiable. Add to this the Intermediary Rules, which mandate rapid takedown and traceability.”

He added, “The legal framework is robust on paper. The real challenge lies in the speed of enforcement and digital-forensics capacity, not the absence of law. I also feel the defence of ‘it was just an AI’ will not survive judicial scrutiny.”

As calls grow louder for accountability, activists are demanding stricter controls on AI image tools, swift takedown mechanisms, and legal action against those generating and circulating abusive content. The Grok controversy has once again exposed the darker side of generative AI and raised urgent questions about whether social-media platforms are equipped — or willing — to prevent technology from being used as a tool of sexual harm.