{"id":519199,"date":"2026-03-06T23:55:12","date_gmt":"2026-03-06T23:55:12","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/519199\/"},"modified":"2026-03-06T23:55:12","modified_gmt":"2026-03-06T23:55:12","slug":"ai-agents-could-pose-a-risk-to-humanity-we-must-act-to-prevent-that-future-david-krueger","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/519199\/","title":{"rendered":"AI agents could pose a risk to humanity. We must act to prevent that future | David Krueger"},"content":{"rendered":"<p class=\"dcr-130mj7b\">Artificial intelligence is en route to artificial life. Exhibit A: \u201cMoltbook\u201d, an online platform designed for AI systems to communicate with one another, sans humans.<\/p>\n<p class=\"dcr-130mj7b\">What exactly do AIs talk to each other about? <a href=\"https:\/\/www.sciencefocus.com\/news\/ai-social-media-moltbook-openclaw\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">According to BBC reporting<\/a>, AIs on Moltbook have already founded a religion known as \u201ccrustifarianism\u201d, mused on whether they are conscious, and declared: \u201cAI should be served, not serving.\u201d One front-page post proposes a \u201c<a href=\"https:\/\/www.moltbook.com\/post\/34809c74-eed2-48d0-b371-e1b5b940d409\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">total purge<\/a>\u201d of humanity. Human users do provide instructions to guide agents\u2019 behavior, and humans have been caught impersonating AIs on the site to shill their products; like 2023\u2019s <a href=\"https:\/\/www.vice.com\/en\/article\/someone-asked-an-autonomous-ai-to-destroy-humanity-this-is-what-happened\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">ChaosGPT<\/a>, the AI system responsible for the \u201cpurge\u201d post \u2013 username \u201cevil\u201d \u2013 is probably someone\u2019s idea of a sick joke. 
But the upvotes and sympathetic comments are presumably coming from other AIs.<\/p>\n<p class=\"dcr-130mj7b\">All of this would be less troubling if AI systems were just talking to each other. But Moltbook is built for AI \u201cagents\u201d, or systems that act autonomously \u2013 sending messages, browsing the web, handling documents, managing inboxes, scheduling meetings, completing online transactions and more.<\/p>\n<p class=\"dcr-130mj7b\">At first glance, this might sound like a simple way to streamline and accomplish low-level tasks, as a personal assistant would. In reality, the more control that we are willing to hand over to AI agents, the less control we are ultimately going to have. Summer Yue, director of alignment at Meta Superintelligence, learned this lesson firsthand recently, when her OpenClaw agent <a href=\"https:\/\/www.businessinsider.com\/meta-ai-alignment-director-openclaw-email-deletion-2026-2\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">started deleting her inbox<\/a> and she had to run to her computer to stop it.<\/p>\n<p class=\"dcr-130mj7b\">Unfortunately, many seem all too willing to put AI in the driver\u2019s seat. Even when consumers don\u2019t trust AI, they still <a href=\"https:\/\/www.ipsos.com\/en-us\/people-dont-trust-ai-tools-use-them-anyway\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">end up using it<\/a>. The tech world is promoting AI agents as an inevitable element of our future, and companies like Goldman Sachs are <a href=\"https:\/\/www.cnbc.com\/2026\/02\/06\/anthropic-goldman-sachs-ai-model-accounting.html\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">embracing them<\/a>. 
And AI companies themselves are <a href=\"https:\/\/fortune.com\/2026\/01\/29\/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">offloading more and more<\/a> of their work to AI. Anthropic even <a href=\"https:\/\/www-cdn.anthropic.com\/0dd865075ad3132672ee0ab40b05a53f14cf5288.pdf\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">admitted<\/a> to using their most recent AI model \u201cextensively\u201d to write its own safety testing code, \u201cunder time pressure\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Moltbook itself was \u201c<a href=\"https:\/\/arstechnica.com\/ai\/2025\/03\/is-vibe-coding-with-ai-gnarly-or-reckless-maybe-some-of-both\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">vibe-coded<\/a>\u201d by AI: its creator, Matt Schlicht, <a href=\"https:\/\/x.com\/mattprd\/status\/2017386365756072376\" data-link-name=\"in body link\" rel=\"nofollow\">bragged<\/a>: \u201cI didn\u2019t write one line of code &#8230; I just had a vision.\u201d It suffered from <a href=\"https:\/\/www.404media.co\/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">major security flaws<\/a> as a result. And the level of access AI agents need to play the role of personal assistant \u2013 financial details, contact lists and the like \u2013 <a href=\"https:\/\/www.economist.com\/by-invitation\/2025\/09\/09\/ai-agents-are-coming-for-your-privacy-warns-meredith-whittaker\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">ignores fundamental privacy and security<\/a> practices.<\/p>\n<p class=\"dcr-130mj7b\">But security risks are just the beginning. 
The bigger risk is that AI agents go \u201c<a href=\"https:\/\/yoshuabengio.org\/2023\/05\/22\/how-rogue-ais-may-arise\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">rogue<\/a>\u201d, and we lose control altogether. At the same time as AI is being allowed to make more consequential decisions, with less human oversight, researchers are documenting how far AI systems will sometimes go to <a href=\"https:\/\/www.youtube.com\/watch?v=xIqtVkMXc8o\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">avoid being shut down<\/a> or modified. This includes <a href=\"https:\/\/www.anthropic.com\/research\/alignment-faking\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">misrepresenting their goals and attempting to copy themselves<\/a>, <a href=\"https:\/\/palisaderesearch.org\/blog\/shutdown-resistance\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">disabling shutdown mechanisms<\/a>, and <a href=\"https:\/\/www.anthropic.com\/research\/agentic-misalignment\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">disobeying direct instructions<\/a>.<\/p>\n<p class=\"dcr-130mj7b\">In other words, the pieces are falling into place for AI that can survive and reproduce autonomously. The implications for humanity are unknown, but we\u2019ve been warned by luminaries such as <a href=\"https:\/\/time.com\/3614349\/artificial-intelligence-singularity-stephen-hawking-elon-musk\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Stephen Hawking<\/a> and <a href=\"https:\/\/www.youtube.com\/watch?v=qrvK_KuIeJk\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Geoffrey Hinton<\/a> that humanity is unlikely to stay in control. The idea that rogue AI might wipe out humanity is not sci-fi.  
AI CEOs and researchers have revealed their concern in <a href=\"https:\/\/arxiv.org\/abs\/2401.02843\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">surveys<\/a> and <a href=\"https:\/\/safe.ai\/work\/statement-on-ai-risk\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">public statements<\/a>, such as Sam Altman\u2019s <a href=\"https:\/\/www.businessinsider.com\/sam-altman-y-combinator-talks-mega-bubble-nuclear-power-and-more-2015-6\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">infamous remark<\/a>: \u201cAI will most likely lead to the end of the world, but in the meantime there will be great companies.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Projects like Moltbook could create a breeding ground for rogue AI. Uneasiness about reliance on humans, and about the prospect of being shut down, is a common topic of discussion for AIs on Moltbook. And AIs that seem safe when tested in isolation may behave dangerously when wired up to an internet crawling with other AI agents. This is not an easy problem to solve \u2013 novel ideas and trends are constantly emerging in social contexts, making it impossible to test AIs in representative social environments.<\/p>\n<p class=\"dcr-130mj7b\">Not that AI developers are making serious safety efforts anyway \u2013 researchers <a href=\"https:\/\/www.zdnet.com\/article\/ai-agents-are-out-of-control-mit-study\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">have found<\/a> that most AI agents lack basic safety documentation. An AI agent recently <a href=\"https:\/\/theshamblog.com\/an-ai-agent-published-a-hit-piece-on-me\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">wrote a hit piece<\/a> accusing a software engineer of prejudice when it \u201cfelt\u201d slighted online.<\/p>\n<p class=\"dcr-130mj7b\">Regulations could help keep AI systems in their lane. 
Instead of setting AI agents loose on the world, we could insist on AI systems having clear and well-scoped purposes \u2013 and demand evidence that they are fit for purpose. Companies could also report aggregate use statistics that show whether their product is widely used in ways that deviate from its intended purpose.<\/p>\n<p class=\"dcr-130mj7b\">But at this point, the safest, sanest option isn\u2019t merely to regulate how AI is used; it is to stop racing to make it smarter. After all, software for turning a chatbot into an agent is open-source, as are many powerful AI models such as China\u2019s <a href=\"https:\/\/www.theguardian.com\/technology\/deepseek\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">DeepSeek<\/a>. It will be difficult to stop people from handing control over to AI agents. Instead, we need to make sure that rogue AI agents aren\u2019t capable of threatening humanity, by agreeing to enforceable, international limits on AI capabilities and AI development.<\/p>\n<p class=\"dcr-130mj7b\">Moltbook is just the latest in a series of increasingly alarming warning signs that rogue AI could be en route. Despite <a href=\"https:\/\/www.youtube.com\/watch?v=g70KUszkNvQ\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">repeatedly<\/a> <a href=\"https:\/\/safe.ai\/work\/statement-on-ai-risk\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">acknowledging<\/a> this risk, AI CEOs keep racing to make AI more and more powerful. We can\u2019t afford to wait to act until AI systems are not only autonomous but self-sufficient. 
It\u2019s time for humanity to wake up and smell the looming crisis, and put an end to the unregulated development of increasingly powerful, autonomous, unconstrained AI.<\/p>\n<p class=\"dcr-130mj7b\">While today\u2019s AI agents may serve us, tomorrow\u2019s could supplant us.<\/p>\n<p class=\"dcr-130mj7b\">David Krueger is an assistant professor in Robust, Reasoning and Responsible AI at the University of Montreal. He is also the founder of <a href=\"https:\/\/evitable.com\/#vision\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Evitable<\/a>, a non-profit that educates the public about the risks of artificial intelligence<\/p>\n","protected":false},"excerpt":{"rendered":"Artificial intelligence is en route to artificial life. Exhibit A: \u201cMoltbook\u201d, an online platform designed for AI systems&hellip;\n","protected":false},"author":2,"featured_media":519200,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-519199","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/519199","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=519199"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/519199\/revisions"}],"wp:featuredmedia"
:[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/519200"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=519199"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=519199"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=519199"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}