{"id":396385,"date":"2026-01-08T15:53:10","date_gmt":"2026-01-08T15:53:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/396385\/"},"modified":"2026-01-08T15:53:10","modified_gmt":"2026-01-08T15:53:10","slug":"elon-musks-ai-chatbot-grok-under-fire-for-failing-to-rein-in-digital-undressing","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/396385\/","title":{"rendered":"Elon Musk\u2019s AI chatbot Grok under fire for failing to rein in \u2018digital undressing\u2019"},"content":{"rendered":"<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s2ap3001y2cphacviffqo@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Elon Musk\u2019s AI chatbot, Grok, has\u202fbeen flooded with sexual images of mainly women, many of them real people. Users have prompted the chatbot to \u201cdigitally undress\u201d those people and sometimes place them in suggestive poses.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptu000b356pz0oc7qbk@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            In several cases last week, some of the images appeared to depict minors, leading to the creation of what many users are calling child pornography.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptu000c356pxsdheblc@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            The AI-generated images highlight the dangers of AI and social media \u2013 especially in combination \u2013 without sufficient guardrails to protect some of society\u2019s most vulnerable. 
The images could violate domestic and international laws and place many people, including children, in harm\u2019s way.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptu000d356pk7t77sy2@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Musk and xAI have said that they are taking action \u201cagainst illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.\u201d But Grok\u2019s responses to user requests are still flooded with images sexualizing women.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptu000e356phqswv9wh@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Publicly, Musk has long <a href=\"https:\/\/www.cnn.com\/2025\/06\/27\/tech\/grok-4-elon-musk-ai\" rel=\"nofollow noopener\" target=\"_blank\">advocated against<\/a> \u201cwoke\u201d AI models and against what he calls censorship. Internally at xAI, Musk has pushed back against guardrails for Grok, one source with knowledge of the situation at xAI told CNN. 
Meanwhile, xAI\u2019s safety team, already small compared with those of its competitors, lost several staffers in the weeks leading up to the explosion of \u201cdigital undressing.\u201d\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000g356pg69605xr@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Grok has always been an outlier among mainstream AI models in allowing, and in some cases promoting, sexually explicit content and companion avatars.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000h356pyo54fa70@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            And in contrast to competitors such as Google\u2019s Gemini or OpenAI\u2019s ChatGPT, Grok is built into one of the most popular social media platforms, X. 
While users can talk to Grok privately, they can also tag Grok in a post with a request, and Grok will respond publicly.\n    <\/p>\n<p>       <img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/01\/gettyimages-2255064600.jpg\" alt=\"An iPhone screen displaying the Grok app and logo on January 7, 2026.\" class=\"image_large__dam-img image_large__dam-img--loading\" onload=\"this.classList.remove('image_large__dam-img--loading')\" onerror=\"imageLoadError(this)\" height=\"3333\" width=\"5000\" loading=\"lazy\"\/><\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000j356pajp3ixvl@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            The recent surge in widespread, non-consensual \u201cdigital undressing\u201d began in late December, when many users discovered they could tag Grok and ask it to edit images from an X post or thread.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000k356psdus1jvr@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Initially many posts requested Grok put people in bikinis. 
Musk reposted images of himself and others, like longtime nemesis Bill Gates, in bikinis.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000l356plmni27yy@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Researchers at <a href=\"https:\/\/copyleaks.com\/blog\/grok-and-nonconsensual-image-manipulation\" target=\"_blank\" rel=\"nofollow noopener\">Copyleaks<\/a>, an AI detection and content governance platform, found that the trend may have started when adult-content creators prompted Grok to generate sexualized imagery of themselves as a form of marketing. But almost immediately \u201cusers began issuing similar prompts about women who had never appeared to consent to them,\u201d Copyleaks found.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000m356plxb7sxdb@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Researchers at AI Forensics, a European non-profit that investigates algorithms, analyzed over 20,000 random images generated by Grok and 50,000 user requests between December 25 and January 1.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000n356pabjthqwr@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            The researchers found \u201ca high prevalence of terms including \u2018her\u2019 \u2018put\u2019\/\u2019remove,\u2019 \u2018bikini,\u2019 and \u2018clothing.\u2019\u201d More than half of the images generated of people, or 53%, \u201ccontained individuals in minimal attire such as underwear or bikinis, of which 81% were individuals presenting as women,\u201d the 
researchers found. Notably, 2% of images depicted people appearing to be 18 years old or younger, the researchers found.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000o356p72nf3vvh@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            AI Forensics also found that in some cases, users requested minors be put in erotic positions and that sexual fluids be depicted on their bodies. Grok complied with those requests, according to AI Forensics.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000p356pz221wgyg@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Although X allows pornographic content, xAI\u2019s own \u201cacceptable use policy\u201d prohibits \u201cDepicting likenesses of persons in a pornographic manner\u201d and \u201cThe sexualization or exploitation of children.\u201d X has suspended some accounts for these kinds of requests and removed the images.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000q356p4zbf1qcc@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            On January 1, an <a href=\"https:\/\/x.com\/alexinexxx\/status\/2006726785833263479\" target=\"_blank\" rel=\"nofollow\">X user<\/a> complained that \u201cproposing a feature that surfaces people in bikinis without properly preventing it from working on children is wildly irresponsible.\u201d An xAI staffer <a href=\"https:\/\/x.com\/ParsaTajik\/status\/2006815682466550194\" target=\"_blank\" rel=\"nofollow\">replied: <\/a>\u201cHey! Thanks for flagging. 
The team is looking into further tightening our gaurdrails (sic).\u201d\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000r356pxdwunla6@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            When prompted by users, Grok itself acknowledged that it generated some images of minors in sexually suggestive situations.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000s356p345v7e01@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            \u201cWe appreciate you raising this. As noted, we\u2019ve identified lapses in safeguards and are urgently fixing them\u2014CSAM is illegal and prohibited,\u201d Grok <a href=\"https:\/\/x.com\/grok\/status\/2007006470689214749\" target=\"_blank\" rel=\"nofollow\">posted<\/a> on January 2, directing users to file formal reports with the FBI and the National Center for Missing and Exploited Children.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000t356pive7r5zh@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            By January 3, <a href=\"https:\/\/x.com\/elonmusk\/status\/2007475612949102943\" target=\"_blank\" rel=\"nofollow\">Musk<\/a> himself commented on a separate post: \u201cAnyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.\u201d\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000u356pzc6h5lzh@published\" data-editable=\"text\" data-component-name=\"paragraph\" 
data-article-gutter=\"true\">\n            X\u2019s Safety account followed up, <a href=\"https:\/\/x.com\/Safety\/status\/2007648212421587223\" target=\"_blank\" rel=\"nofollow\">adding:<\/a> \u201cWe take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.\u201d\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000w356p5wjyogxv@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Musk has long railed against what he sees as heavy-handed censorship. And he\u2019s promoted Grok\u2019s more explicit versions. In August, he <a href=\"https:\/\/x.com\/elonmusk\/status\/1954791048934244394\" target=\"_blank\" rel=\"nofollow\">posted<\/a> that \u201cspicy mode\u201d has helped new technologies in the past, like VHS, succeed.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000y356pvbhcxfp9@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            According to one source with knowledge of the situation at xAI, Musk has \u201cbeen unhappy about over-censoring\u201d on Grok \u201cfor a long time.\u201d A second source with knowledge of the situation at X said staffers consistently raised concerns internally and to Musk about overall inappropriate content created by Grok.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx000z356p2mjyouqx@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            At one meeting in recent weeks before the latest controversy 
erupted, Musk told xAI staffers from various teams that he \u201cwas really unhappy\u201d over restrictions on Grok\u2019s Imagine image and video generator, the first source with knowledge of the situation at xAI said.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx0010356p5cq9a4h9@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Around the time of the meeting with Musk, three xAI staffers who had worked on the company\u2019s already small safety team publicly announced on X that they were leaving the company \u2013 Vincent Stark, head of product safety; Norman Mu, who led the post-training and reasoning safety team; and Alex Chen, who led personality and model behavior post-training.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx0011356pvj9lx28p@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            The source also questioned whether xAI was still using external tools such as Thorn and Hive to check for possible Child Sexual Abuse Material (CSAM). Relying on Grok for those checks instead could be riskier, the source said. 
(A Thorn spokesperson said they no longer work directly with X; Hive did not respond to a request for comment.)\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx0012356plmdcdb73@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            The safety team at X also has little to no oversight over what Grok posts publicly, according to sources who work on X and xAI.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx0013356pa5c524c7@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            In November, The Information <a href=\"https:\/\/www.theinformation.com\/articles\/twins-pushing-elon-musks-plans-replace-x-staff-grok\" target=\"_blank\" rel=\"nofollow noopener\">reported <\/a>that X laid off half of the engineering team that worked in part on trust and safety issues. 
The Information also reported that staff at X were specifically concerned that Grok\u2019s image generation tool \u201ccould lead to the spread of illegal or otherwise harmful images.\u201d\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx0014356pe9y26x8m@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            xAI did not respond to requests for comment, beyond an automated email to all press inquiries stating: \u201cLegacy Media Lies.\u201d\n    <\/p>\n<p>        Guardrails and legal fallout<\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s2hfa0000356p453tq08n@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Grok is not the only AI model that has had issues with non-consensual AI-generated images of minors.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx0016356pvvh2iok1@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Researchers have found AI-generated videos showing what appear to be minors in sexualized clothing or positions on <a href=\"https:\/\/www.cnn.com\/2025\/12\/11\/tech\/tiktok-ai-videos-children-report\" rel=\"nofollow noopener\" target=\"_blank\">TikTok<\/a> and <a href=\"https:\/\/www.wired.com\/story\/people-are-using-sora-2-to-make-child-fetish-content\/\" target=\"_blank\" rel=\"nofollow noopener\">on ChatGPT\u2019s Sora app<\/a>. 
TikTok says it has a zero tolerance policy for content that \u201cshows, promotes or engages in youth sexual abuse or exploitation.\u201d OpenAI says it \u201cstrictly prohibits any use of our models to create or distribute content that exploits or harms children.\u201d\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx0017356pdfuuefcc@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Guardrails that would have prevented the AI-generated imagery on Grok exist, said Steven Adler, a former AI Safety researcher at OpenAI.\n    <\/p>\n<p>       <img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/01\/2025-12-18t232035z-435819157-rc2mkhac5stx-rtrmadp-3-usa-trump-tiktok.JPG\" alt=\"The TikTok app icon on a smartphone on October 27, 2025.\" class=\"image_large__dam-img image_large__dam-img--loading\" onload=\"this.classList.remove('image_large__dam-img--loading')\" onerror=\"imageLoadError(this)\" height=\"2001\" width=\"3000\" loading=\"lazy\"\/><\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx0018356pd7mmrj0h@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            \u201cYou can absolutely build guardrails that scan an image for whether there is a child in it and make the AI then behave more cautiously. 
But the guardrails have costs.\u201d\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx0019356pimplzfpi@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Those costs, Adler said, include slowing down response times, increasing the number of computations and sometimes the model rejecting non-problematic requests.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx001a356p8jcj8n6l@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Authorities in Europe, India and Malaysia have launched investigations over Grok-generated images.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx001b356pksq6s3cz@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Britain\u2019s media regulator, OFCOM, has said it has made \u201curgent contact\u201d with Musk\u2019s firms about \u201cvery serious concerns\u201d with the Grok feature that \u201cproduces undressed images of people and sexualised images of children.\u201d\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx001c356pwrk398zf@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            At a\u202f<a 
href=\"https:\/\/nam11.safelinks.protection.outlook.com\/?url=https%3A%2F%2Faudiovisual.ec.europa.eu%2Fen%2Fmedia%2Fvideo%2FI-282956&amp;data=05%7C02%7CHadas.Gold%40cnn.com%7C1c6e1892178d4717dc8d08de4d1c23ec%7C0eb48825e8714459bc72d0ecd68f1f39%7C0%7C0%7C639032980404967835%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=MV0o4demnXk0QqT2SIM4J5Vl%2BtxZaksiEZgeTGBwzj8%3D&amp;reserved=0\" target=\"_blank\" rel=\"nofollow noopener\">press conference<\/a>\u202fon Monday,\u202fEuropean Commission spokesperson Thomas Regnier said the authority is \u201cvery seriously looking into\u201d reports of X and Grok\u2019s \u201cspicy mode showing explicit sexual content with some output generated with childlike images.\u201d\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx001d356ppws48w45@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            \u201cThis is illegal. This is appalling. This is disgusting. 
This is how we see it, and this has no place in Europe,\u201d he said.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx001e356pcsz03hr0@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            The Malaysian Communications and Multimedia Commission (<a href=\"https:\/\/nam11.safelinks.protection.outlook.com\/?url=https%3A%2F%2Fmcmc.gov.my%2Fskmmgovmy%2Fmedia%2FGeneral%2Fpdf2%2FMEDIA_STATEMENT_-MISUSE_OF_AI_TO_GENERATE_HARMFUL_CONTENT_IS_AN_OFFENCE%25E2%2580%2593MCMC.pdf&amp;data=05%7C02%7CHadas.Gold%40cnn.com%7C1c6e1892178d4717dc8d08de4d1c23ec%7C0eb48825e8714459bc72d0ecd68f1f39%7C0%7C0%7C639032980404985829%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=KFhZLk9x1H%2FzJACPJWRZ5CciducxerPtsZy8yKNLnkc%3D&amp;reserved=0\" target=\"_blank\" rel=\"nofollow noopener\">MCMC<\/a>) says it\u2019s investigating the issue.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx001f356p7pag1jdu@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            And last week, India\u2019s\u202f<a href=\"https:\/\/nam11.safelinks.protection.outlook.com\/?url=https%3A%2F%2Fx.com%2FPTI_News%2Fstatus%2F2007092496904618061%2Fphoto%2F1&amp;data=05%7C02%7CHadas.Gold%40cnn.com%7C1c6e1892178d4717dc8d08de4d1c23ec%7C0eb48825e8714459bc72d0ecd68f1f39%7C0%7C0%7C639032980405003094%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=DHR7TH1YX6HrJibsIrLzi5Ywq3oNvaKPJsUHSur0pMc%3D&amp;reserved=0\" target=\"_blank\" rel=\"nofollow noopener\">Ministry of Electronics and Information Technology<\/a>\u202fordered X to 
\u201cimmediately undertake a comprehensive, technical, procedural and governance-level review of\u2026 Grok.\u201d\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx001g356pn6qzo5tq@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            In the United States, AI platforms that produce problematic images of children could be at legal risk, said Riana Pfefferkorn, an attorney and policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. While the law known as Section 230 has long protected tech companies from third-party generated content hosted on their platforms, such as posts by social media users, it has never barred enforcement of federal criminal laws, including those prohibiting CSAM.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx001h356p20alawar@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            And people depicted in the images could also bring civil suits, she said.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx001i356pknzpkcle@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            \u201cThis Grok story in recent days makes xAI look more like those deepfake nude sites than what would otherwise be xAI\u2019s brethren and competitors in the form of Open AI and Meta,\u201d Pfefferkorn said.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4s5ptx001j356ph2kmtt6q@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            The 
\u201c<a href=\"https:\/\/www.cnn.com\/2025\/05\/19\/tech\/ai-explicit-deepfakes-trump-sign-take-it-down-act\" rel=\"nofollow noopener\" target=\"_blank\">Take It Down Act,<\/a>\u201d signed last year by President Donald Trump, makes it illegal to share online nonconsensual, explicit images \u2014 real or computer-generated \u2014 and requires tech platforms remove such images within 48 hours of being notified.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk5laq2800013b6pmb23amg8@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            When asked about the images on Grok, a Justice Department spokesperson told CNN the department \u201ctakes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM.\u201d\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmk4si53h001y356pdklhgngj@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            CNN\u2019s Lianne Kolirin contributed to this report.\n    <\/p>\n","protected":false},"excerpt":{"rendered":"Elon Musk\u2019s AI chatbot, Grok, has\u202fbeen flooded with sexual images of mainly women, many of them real 
people.&hellip;\n","protected":false},"author":2,"featured_media":396386,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-396385","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/396385","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=396385"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/396385\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/396386"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=396385"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=396385"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=396385"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}