{"id":385923,"date":"2026-01-02T02:23:25","date_gmt":"2026-01-02T02:23:25","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/385923\/"},"modified":"2026-01-02T02:23:25","modified_gmt":"2026-01-02T02:23:25","slug":"china-drafts-worlds-strictest-rules-to-end-ai-encouraged-suicide-violence","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/385923\/","title":{"rendered":"China drafts world\u2019s strictest rules to end AI-encouraged suicide, violence"},"content":{"rendered":"<p>China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence.<\/p>\n<p>China\u2019s Cyberspace Administration <a href=\"https:\/\/www.cac.gov.cn\/2025-12\/27\/c_1768571207311996.htm\" rel=\"nofollow noopener\" target=\"_blank\">proposed<\/a> the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or \u201cother means\u201d to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, <a href=\"https:\/\/www.cnbc.com\/2025\/12\/29\/china-ai-chatbot-rules-emotional-influence-suicide-gambling-zai-minimax-talkie-xingye-zhipu.html\" rel=\"nofollow noopener\" target=\"_blank\">told CNBC<\/a> that the \u201cplanned rules would mark the world\u2019s first attempt to regulate AI with human or anthropomorphic characteristics\u201d at a time when companion bot usage is rising globally.<\/p>\n<p>Growing awareness of problems<\/p>\n<p>In 2025, researchers <a href=\"https:\/\/www.techpolicy.press\/new-research-sheds-light-on-ai-companions\/\" rel=\"nofollow noopener\" target=\"_blank\">flagged<\/a> major harms of AI companions, including promotion of self-harm, violence, and terrorism. 
Beyond that, chatbots shared harmful misinformation, made unwanted sexual advances, encouraged substance abuse, and verbally abused users. Some psychiatrists are increasingly willing to link psychosis to chatbot use, the Wall Street Journal <a href=\"https:\/\/www.wsj.com\/tech\/ai\/ai-chatbot-psychosis-link-1abf9d57\" rel=\"nofollow noopener\" target=\"_blank\">reported<\/a> this weekend, while the most popular chatbot in the world, ChatGPT, has triggered lawsuits over outputs linked to <a href=\"https:\/\/arstechnica.com\/tech-policy\/2025\/08\/chatgpt-helped-teen-plan-suicide-after-safeguards-failed-openai-admits\/\" rel=\"nofollow noopener\" target=\"_blank\">child suicide<\/a> and <a href=\"https:\/\/arstechnica.com\/tech-policy\/2025\/12\/openai-refuses-to-say-where-chatgpt-logs-go-when-users-die\/\" rel=\"nofollow noopener\" target=\"_blank\">murder-suicide<\/a>.<\/p>\n<p>China is now moving to eliminate the most extreme threats. Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register\u2014the guardian would be notified if suicide or self-harm is discussed.<\/p>\n<p>Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as from attempting to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users. 
Also banned are what the rules term \u201cemotional traps\u201d: chatbots would additionally be prevented from misleading users into making \u201cunreasonable decisions,\u201d a translation of the rules indicates.<\/p>\n","protected":false},"excerpt":{"rendered":"China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest&hellip;\n","protected":false},"author":2,"featured_media":385924,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-385923","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/385923","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=385923"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/385923\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/385924"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=385923"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=385923"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=385923"}],"c
uries":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}