{"id":369851,"date":"2026-01-14T19:02:07","date_gmt":"2026-01-14T19:02:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/369851\/"},"modified":"2026-01-14T19:02:07","modified_gmt":"2026-01-14T19:02:07","slug":"liz-kendalls-response-to-x-nudification-is-good-but-not-enough-to-solve-the-problem-nana-nwachukwu","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/369851\/","title":{"rendered":"Liz Kendall\u2019s response to X \u2018nudification\u2019 is good \u2013 but not enough to solve the problem | Nana Nwachukwu"},"content":{"rendered":"<p class=\"dcr-130mj7b\">On X, a woman posts a photo in a sari, and within minutes, various users are underneath the post tagging Grok to strip her down to a bikini. It is a shocking violation of privacy, but now a <a href=\"https:\/\/www.theguardian.com\/news\/ng-interactive\/2026\/jan\/11\/how-grok-nudification-tool-went-viral-x-elon-musk\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">familiar and commonplace practice<\/a>. Between June 2025 and January 2026, I documented 565 instances of users requesting Grok to create nonconsensual intimate imagery. 
Of these, 389 were requested in just one day.<\/p>\n<p class=\"dcr-130mj7b\">Last Friday, after a backlash against the platform\u2019s ability to create such nonconsensual sexual images, X announced that Grok\u2019s AI image generation feature would <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/09\/grok-image-generator-outcry-sexualised-ai-imagery\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">only be available to subscribers.<\/a> Reports suggest that the bot <a href=\"https:\/\/www.telegraph.co.uk\/business\/2026\/01\/13\/musks-x-stops-bikini-bot-undressing-women\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">now no longer responds<\/a> to prompts to generate images of women in bikinis (although it will apparently still do so for requests about men).<\/p>\n<p class=\"dcr-130mj7b\">But as the technology secretary, Liz Kendall, rightly states, this action \u201cdoes not go anywhere near far enough\u201d. Kendall has announced that creating nonconsensual intimate images will <a href=\"https:\/\/www.theguardian.com\/politics\/live\/2026\/jan\/12\/grok-x-nudification-technology-online-safety-labour-reform-tories-lib-dems-uk-politics-latest-news-updates\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">become a criminal offence<\/a> this week, and that she will criminalise the supply of nudification apps. This is appropriate, given X\u2019s weak response. Placing the feature behind a paywall means that the platform can more directly profit from the online dehumanisation and sexual harassment of women and minors. And stopping the \u201cbikini\u201d responses after public censure and the threat of legislation is the least X can do \u2013 the bigger question is why it was even possible in the first place.<\/p>\n<p class=\"dcr-130mj7b\">These measures are a step forward. 
The shadow technology secretary, Julia Lopez, suggested in her response that the government was overreacting, that this was just \u201ca modern-day iteration of an old problem\u201d, no different from crude drawings or Photoshop. She\u2019s wrong. The scale is different. The accessibility is different. The speed is different. With Photoshop, technical skill is required, as well as direct publication by the user, which places responsibility for every action except platform provision squarely on them. In this case, though, the user makes a regular text reply with a request, and Grok generates and publishes criminal abuse to a massive audience.<\/p>\n<p class=\"dcr-130mj7b\">Kendall\u2019s approach criminalises users who create or alter these images, and the companies that supply dedicated nudification tools. That\u2019s where it misses the point. Grok and most prominent image-generation tools are not dedicated nudification tools. They are general-purpose AI with weak safeguards. Kendall is not asking platforms to implement proactive detection. The law waits for harm to happen, then punishes.<\/p>\n<p class=\"dcr-130mj7b\">The drawbacks of this approach are obvious. I observed this material being generated for months before the mainstream backlash began. Harmful images that were generated still exist, and may have been saved and shared across other platforms. For the victims of this AI sexual abuse material, regulation after the fact won\u2019t help. For harm that is structurally amplified in this manner, the approach must be preventive, not reactive.<\/p>\n<p class=\"dcr-130mj7b\">Another more fundamental problem is that while the UK pushes AI safety regulation, the US is moving in the opposite direction. 
The <a href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/12\/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Trump administration<\/a> wants to \u201cenhance the United States\u2019 global AI dominance through a minimally burdensome national policy framework for AI\u201d. Under this framework, there is little incentive for American AI companies to regulate misuse of their products. This matters because AI regulation is incomplete without cross-border collaboration. Kendall can criminalise users in the UK; she can threaten to ban X entirely. But she cannot stop Grok from being programmed in San Francisco. She cannot force OpenAI or Anthropic or any other US company to prioritise safety over speed. Without US cooperation, we are trying to regulate a transnational technology with national laws.<\/p>\n<p class=\"dcr-130mj7b\">While this wrangling over regulation and policy updates plays out, many victims, and other women online, will be wondering what this new era of AI-enabled online sexual harassment means for them, and questioning their participation on global social media platforms. If my image has been digitally altered, how do I get justice if the perpetrator is halfway across the world? Transparency in the practices of AI companies is <a href=\"https:\/\/news.stanford.edu\/stories\/2025\/12\/foundation-model-transparency-index-ai-companies-information\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">in decline<\/a> \u2013 so how could the same companies be trusted to be accountable and audit systems that reproduce harm?<\/p>\n<p class=\"dcr-130mj7b\">The truth is that these companies cannot be trusted. This is why, globally, regulation needs to shift from \u201cremove harm when you find it\u201d to \u201cprove that your system prevents harm\u201d. 
We must code power into the process by requiring mandatory input filtering, independent audits and licensing conditions that make prevention a legally mandated technical requirement. This would catch harm before it materialises, enabling regulators to curb harmful behaviour by these AI companies before their products are deployed. This is the type of work that we at the <a href=\"https:\/\/aial.ie\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">AI Accountability Lab<\/a> in the <a href=\"https:\/\/www.adaptcentre.ie\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Adapt Centre<\/a> at Trinity College Dublin are pushing forward through our research.<\/p>\n<p class=\"dcr-130mj7b\">Regulation after the fact is better than nothing. However, it offers little to the victims who have already been harmed, and sidesteps the conspicuous absence of law enforcement in addressing these platform harms.<\/p>\n","protected":false},"excerpt":{"rendered":"On X, a woman posts a photo in a sari, and within minutes, various users are underneath 
the&hellip;\n","protected":false},"author":2,"featured_media":369852,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-369851","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/369851","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=369851"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/369851\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/369852"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=369851"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=369851"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=369851"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}