{"id":387424,"date":"2026-01-02T19:16:07","date_gmt":"2026-01-02T19:16:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/387424\/"},"modified":"2026-01-02T19:16:07","modified_gmt":"2026-01-02T19:16:07","slug":"elon-musks-grok-ai-generates-images-of-minors-in-minimal-clothing-ai-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/387424\/","title":{"rendered":"Elon Musk\u2019s Grok AI generates images of \u2018minors in minimal clothing\u2019 | AI (artificial intelligence)"},"content":{"rendered":"<p class=\"dcr-130mj7b\"><a href=\"https:\/\/www.theguardian.com\/technology\/elon-musk\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Elon Musk<\/a>\u2019s chatbot Grok posted on Friday that lapses in safeguards had led it to generate \u201cimages depicting minors in minimal clothing\u201d on social media platform X. The <a href=\"https:\/\/www.theguardian.com\/technology\/chatbots\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">chatbot<\/a>, a product of Musk\u2019s company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts.<\/p>\n<p class=\"dcr-130mj7b\">Screenshots shared by users on <a href=\"https:\/\/www.theguardian.com\/technology\/twitter\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">X<\/a> showed Grok\u2019s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThere are isolated cases where users prompted for and received AI images depicting minors in minimal clothing,\u201d Grok said in a <a href=\"https:\/\/x.com\/grok\/status\/2006618883055046758\" data-link-name=\"in body link\" rel=\"nofollow\">post<\/a> on X in response to a user. 
\u201cxAI has safeguards, but improvements are ongoing to block such requests entirely.\u201d<\/p>\n<p class=\"dcr-130mj7b\">\u201cAs noted, we\u2019ve identified lapses in safeguards and are urgently fixing them\u2014CSAM is illegal and prohibited,\u201d xAI <a href=\"https:\/\/x.com\/grok\/status\/2007006470689214749\" data-link-name=\"in body link\" rel=\"nofollow\">posted<\/a> to the @Grok account on X, referring to child sexual abuse material.<\/p>\n<p class=\"dcr-130mj7b\">Many users on X have prompted Grok to generate sexualized, nonconsensual AI-altered versions of images in recent days, in some cases removing people\u2019s clothing without their consent. Musk on Thursday reposted an AI photo of himself in a bikini, captioned with cry-laughing emojis, in a nod to the trend.<\/p>\n<p class=\"dcr-130mj7b\">Grok\u2019s generation of sexualized images appeared to lack safety guardrails, allowing minors to appear in its posted images of people, usually women, in minimal clothing, according to posts from the chatbot. In a reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring, although it said \u201cno system is 100% foolproof\u201d, adding that xAI was prioritizing improvements and reviewing details shared by users.<\/p>\n<p class=\"dcr-130mj7b\">When contacted for comment by email, xAI replied with the message: \u201cLegacy Media Lies\u201d.<\/p>\n<p class=\"dcr-130mj7b\">The problem of AI being used to generate child sexual abuse material is a longstanding issue in the <a href=\"https:\/\/www.theguardian.com\/technology\/artificialintelligenceai\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">artificial intelligence<\/a> industry. 
A 2023 Stanford study <a href=\"https:\/\/www.axios.com\/2023\/12\/20\/ai-training-data-child-abuse-images-stanford\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">found<\/a> that a dataset used to train a number of popular AI image-generation tools contained more than 1,000 CSAM images. Training AI on images of child abuse can allow models to generate new images of children being exploited, experts say.<\/p>\n<p class=\"dcr-130mj7b\">Grok also has a history of failing to maintain its safety guardrails and of posting misinformation. In May of last year, Grok began posting about the <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/may\/14\/elon-musk-grok-white-genocide\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">far-right conspiracy theory of \u201cwhite genocide\u201d<\/a> in South Africa in replies to posts with no relation to the subject. xAI also apologized in July after Grok began posting rape fantasies and <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/jul\/09\/grok-ai-praised-hitler-antisemitism-x-ntwnfb\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">antisemitic material<\/a>, including <a href=\"https:\/\/www.theguardian.com\/us-news\/2025\/jul\/12\/elon-musk-grok-antisemitic\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">calling itself \u201cMechaHitler\u201d<\/a> and praising Nazi ideology. 
The company nevertheless <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/jul\/14\/us-military-xai-deal-elon-musk\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">secured<\/a> a nearly $200m contract with the US Department of Defense a week after the incidents.<\/p>\n","protected":false},"excerpt":{"rendered":"Elon Musk\u2019s chatbot Grok posted on Friday that lapses in safeguards had led it to generate \u201cimages depicting&hellip;\n","protected":false},"author":2,"featured_media":387425,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-387424","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/387424","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=387424"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/387424\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/387425"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=387424"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=387424"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=387424"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}