Experts concerned after Grok chatbot ‘un-redacts’ images of children in Epstein files

Published at 12:01 GMT

Thomas Copeland
BBC Verify Live journalist

AI ethics experts have said they are deeply concerned after BBC Verify found repeated cases of X’s AI chatbot Grok attempting to remove redactions from images of children released by the US Department of Justice as part of the Epstein files.

In one post viewed almost 24 million times, Grok responds to a user request to “unblur” an image of a child in a swimming pool next to convicted sex offender Jeffrey Epstein.

The supposedly un-redacted faces generated by Grok are not real: they are predictions based on images the AI has been trained on, so they do not reveal the child’s actual identity.

But Gina Neff, professor of responsible AI at Queen Mary University of London, said these images “still do serious damage to our information sphere”.

“They disrespect the real victims by suggesting their right to privacy is little more than an online game,” Neff told BBC Verify.

Tanya Goodin, CEO of EthicAI, added: “This is inevitably what happens when technology is built with absolutely no safety guardrails.”

We’ve contacted X to ask why the platform is allowing Grok to respond to users’ requests to unblur the redacted images, but haven’t heard back.

A comparison of two images: the original photograph shows Epstein beside a swimming pool, with a child in the pool whose face is covered by a large black rectangle; an AI-manipulated version has attempted to put a face to the child and has been marked by the BBC with a large red cross.