Grok is a free virtual assistant – with some paid-for premium features – which responds to X users’ prompts when they tag it in a post.
Samantha Smith, a journalist who discovered users had used the AI to create pictures of her in a bikini, told the BBC’s PM programme on Friday it had left her feeling “dehumanised and reduced into a sexual stereotype”.
“While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me,” she said.
Under the Online Safety Act, Ofcom says it is illegal to create or share intimate or sexually explicit images – including “deepfakes” created with AI – of a person without their consent.
Tech firms are also expected to take “appropriate steps” to reduce the risk of UK users encountering such content, and to take it down “quickly” when made aware of it.
Meanwhile, European Commission spokesperson Thomas Regnier said on Monday it was aware of posts made by Grok “showing explicit sexual content”, as well as “some output generated with childlike images”.
“This is illegal,” he said, also calling it “appalling” and “disgusting”.
“This is how we see it, and this has no place in Europe,” he said.
Regnier said X was “well aware” the EU was “very serious” about enforcing its rules for digital platforms, having handed X a €120m (£104m) fine in December for breaching its Digital Services Act.
A Home Office spokesperson said the government was legislating to ban nudification tools, and that under a new criminal offence anyone who supplied such technology would “face a prison sentence and substantial fines”.
Additional reporting by Chris Vallance