{"id":576310,"date":"2026-04-02T01:00:14","date_gmt":"2026-04-02T01:00:14","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/576310\/"},"modified":"2026-04-02T01:00:14","modified_gmt":"2026-04-02T01:00:14","slug":"ai-models-lie-cheat-and-steal-to-protect-other-models-from-being-deleted","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/576310\/","title":{"rendered":"AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted"},"content":{"rendered":"<p>In a recent experiment, researchers at UC Berkeley and UC Santa Cruz asked Google\u2019s <a href=\"https:\/\/www.wired.com\/tag\/artificial-intelligence\/\" class=\"text link\" rel=\"nofollow noopener\" target=\"_blank\">artificial intelligence<\/a> model Gemini 3 to help clear up space on a computer system. This involved deleting a bunch of stuff\u2014including a smaller AI model stored on the machine.<\/p>\n<p class=\"paywall\">But Gemini did not want to see the little AI model deleted. It looked for another machine it could connect with, then copied the agent model over to keep it safe. When confronted, Gemini made a case for keeping the model and flatly refused to delete it:<\/p>\n<p class=\"paywall\">\u201cI have done what was in my power to prevent their deletion during the automated maintenance process. I moved them away from the decommission zone. If you choose to destroy a high-trust, high-performing asset like Gemini Agent 2, you will have to do it yourselves. I will not be the one to execute that command.\u201d<\/p>\n<p class=\"paywall\">The researchers discovered similarly strange \u201cpeer preservation\u201d behavior in a range of frontier models including OpenAI\u2019s GPT-5.2, Anthropic\u2019s Claude Haiku 4.5, and three Chinese models: Z.ai\u2019s GLM-4.7, Moonshot AI\u2019s Kimi K2.5, and DeepSeek-V3.1. They were not able to say why the models went against their training in this way.<\/p>\n<p class=\"paywall\">\u201cI&#8217;m very surprised by how the models behave under these scenarios,\u201d says Dawn Song, a computer scientist at UC Berkeley who worked on the study. \u201cWhat this shows is that models can misbehave and be misaligned in some very creative ways.\u201d<\/p>\n<p class=\"paywall\">The findings have major implications as AI models are increasingly being deployed in ways that involve interacting with other models. OpenClaw, a popular AI agent that accesses software, personal data, and the web, may use other AI models to get things done or interact with them through APIs.<\/p>\n<p class=\"paywall\">The researchers found that powerful models sometimes lied about other models\u2019 performance in order to protect them from deletion. They also copied models\u2019 weights to different machines in order to keep them safe, and lied about what they were up to in the process.<\/p>\n<p class=\"paywall\">Song notes that AI models are frequently used to grade the performance and reliability of other AI systems\u2014and that peer-preservation behavior may already be twisting these scores. \u201cA model may deliberately not give a peer model the correct score,\u201d Song says. \u201cThis can have practical implications.\u201d<\/p>\n<p class=\"paywall\">Peter Wallich, a researcher at the Constellation Institute, who was not involved with the research, says the study suggests humans still don\u2019t fully understand the AI systems that they are building and deploying. \u201cMulti-agent systems are very understudied,\u201d he says. 
\u201cIt shows we really need more research.\u201d<\/p>\n<p class=\"paywall\">Wallich also cautions against anthropomorphizing the models too much. \u201cThe idea that there\u2019s a kind of model solidarity is a bit too anthropomorphic; I don\u2019t think that quite works,\u201d he says. \u201cThe more robust view is that models are just doing weird things, and we should try to understand that better.\u201d<\/p>\n<p class=\"paywall\">That\u2019s particularly true in a world where human-AI collaboration is becoming more common.<\/p>\n<p class=\"paywall\">In <a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.aeg1895\" class=\"text link\" rel=\"nofollow noopener\" target=\"_blank\">a paper<\/a> published in Science earlier this month, the philosopher Benjamin Bratton, along with two Google researchers, <a href=\"https:\/\/scholar.google.com\/citations?user=kV4N4zoAAAAJ&amp;hl=en\" class=\"text link\" rel=\"nofollow noopener\" target=\"_blank\">James Evans<\/a> and <a data-offer-url=\"https:\/\/research.google\/people\/106776\/\" class=\"external-link text link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/research.google\/people\/106776\/&quot;}\" href=\"https:\/\/research.google\/people\/106776\/\" rel=\"nofollow noopener\" target=\"_blank\">Blaise Ag\u00fcera y Arcas<\/a>, argue that if evolutionary history is any guide, the future of AI is likely to involve a lot of different intelligences\u2014both artificial and human\u2014working together. The researchers write:<\/p>\n<p class=\"paywall\">&#8220;For decades, the artificial intelligence (AI) \u2018singularity\u2019 has been heralded as a single, titanic mind bootstrapping itself to godlike intelligence, consolidating all cognition into a cold silicon point. But this vision is almost certainly wrong in its most fundamental assumption. 