{"id":547575,"date":"2026-03-19T21:26:37","date_gmt":"2026-03-19T21:26:37","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/547575\/"},"modified":"2026-03-19T21:26:37","modified_gmt":"2026-03-19T21:26:37","slug":"openais-safety-pledges-in-the-wake-of-tumbler-ridge-arent-ai-regulation-theyre-surveillance","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/547575\/","title":{"rendered":"OpenAI\u2019s safety pledges in the wake of Tumbler Ridge aren\u2019t AI regulation \u2014 they\u2019re surveillance"},"content":{"rendered":"<p>Within two days of news that the Tumbler Ridge perpetrator\u2019s ChatGPT account had been flagged prior to the shooting, OpenAI CEO Sam Altman <a href=\"https:\/\/www.cbc.ca\/news\/politics\/evan-solomon-open-ai-meeting-ceo-sam-altman-9.7114767\" rel=\"nofollow noopener\" target=\"_blank\">met with Federal AI Minister Evan Solomon<\/a> and <a href=\"https:\/\/www.cbc.ca\/news\/canada\/british-columbia\/sam-altman-david-eby-meeting-9.7116693\" rel=\"nofollow noopener\" target=\"_blank\">British Columbia Premier David Eby<\/a>. <\/p>\n<p>Both meetings produced commitments from OpenAI: reporting threats directly to the RCMP, retroactive review of previously flagged accounts, distress-redirect protocols, access to the company\u2019s safety office for Canadian experts and an agreement to work with B.C. on regulatory recommendations to Ottawa. <\/p>\n<p>Altman also agreed to <a href=\"https:\/\/globalnews.ca\/news\/11718464\/bc-premier-ai-ceo-apology-tumbler-ridge\/\" rel=\"nofollow noopener\" target=\"_blank\">apologize to the community of Tumbler Ridge<\/a>, where 18-year-old Jesse Van Rootselaar killed eight people and wounded many others before dying of a self-inflicted wound. Months prior to the shooting, Van Rootselaar\u2019s ChatGPT account had been flagged for scenarios involving gun violence. 
The account was banned, but not reported to law enforcement.<\/p>\n<p>OpenAI\u2019s new commitments are significant gestures. But they resolve a narrower question than the one Tumbler Ridge actually raised. As <a href=\"https:\/\/theconversation.com\/danger-was-flagged-but-not-reported-what-the-tumbler-ridge-tragedy-reveals-about-canadas-ai-governance-vacuum-276718\" rel=\"nofollow noopener\" target=\"_blank\">I argued earlier<\/a>, the core problem was not a reporting failure. It was a governance vacuum. <\/p>\n<p>What\u2019s changed since? OpenAI has agreed to make the same type of unilateral determination it made before, but to act on it more aggressively, routing the result directly to the RCMP. That is not a fix. It is the same unaccountable architecture with a faster trigger.<\/p>\n<p>            <img decoding=\"async\" alt=\"A man in a blue suit standing at a microphone with other people in the background\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/03\/file-20260317-58-tpvoxf.JPG\" class=\"native-lazy\" loading=\"lazy\"  \/><\/p>\n<p>              AI Minister Evan Solomon talks about interactions with OpenAI regarding the use of ChatGPT by the shooter in the Tumbler Ridge mass shooting, in the foyer of the House of Commons on Parliament Hill in Ottawa, in February 2026.<br \/>\n              THE CANADIAN PRESS\/Justin Tang<\/p>\n<p>The human-in-the-loop fallacy<\/p>\n<p>Consider what we now know about the internal process. The shooter\u2019s account was flagged. Human moderators reviewed the interactions. Some advocated escalating to law enforcement. Other humans, guided by the company\u2019s own opaque thresholds, decided against it. The breakdown was not mechanical. It was institutional.<\/p>\n<p>\u201cHuman in the loop\u201d is one of the <a href=\"https:\/\/doi.org\/10.1016\/j.nbt.2024.12.003\" rel=\"nofollow noopener\" target=\"_blank\">most repeated reassurances in AI safety discourse<\/a>. 
The Tumbler Ridge case exposes its limits. Humans in the loop are only as accountable as the institutional structure around them. When that structure is a private corporation with no legally binding reporting obligations, no transparency requirements and no external oversight, the human in the loop is simply a more sympathetic face on an unaccountable system. <\/p>\n<p>OpenAI has since announced that its thresholds have been updated. But updated by whom, according to what criteria, subject to what review? These remain internal decisions, invisible to the public and unreachable by Parliament.<\/p>\n<p>The surveillance substitution<\/p>\n<p>There is a deeper problem that receives almost no attention. The proposed settlement does not regulate AI. It regulates users.<\/p>\n<p>The entire apparatus being constructed (internal threat identification, flagging, direct RCMP referral) is oriented toward monitoring what people say to AI, not toward how AI systems are designed, trained or constrained in their responses. <\/p>\n<p>True AI regulation asks whether a model might facilitate or amplify harmful ideation through its interaction patterns. It asks how the system is built, what it\u2019s tested for and what obligations attach to its deployment. <\/p>\n<p>The current arrangement asks none of these questions. Instead, it builds a pipeline from private AI interactions to law enforcement, administered by a corporation, governed by proprietary policy.<\/p>\n<p>I call this the surveillance substitution: a governance vacuum gets filled not with democratic regulation, but with corporate surveillance of users. It is not regulation of AI. It is regulation of the people who use AI, conducted by the AI company itself, with the police as the endpoint.<\/p>\n<p>The civil liberties implications are substantial. 
Research on compassion-sensitive AI, including my own work on <a href=\"https:\/\/doi.org\/10.3389\/fdgth.2023.1278186\" rel=\"nofollow noopener\" target=\"_blank\">how AI systems should respond to users in vulnerable states<\/a>, consistently shows that people disclose distress to chatbots precisely because the interaction feels private and non-judgmental. <\/p>\n<p>If that space becomes a monitored channel where concerning disclosures trigger law enforcement referrals based on opaque corporate criteria, the most vulnerable users may stop disclosing. The chilling effect on help-seeking behaviour has not been studied, and it has not been discussed in any of the public negotiations following Tumbler Ridge.<\/p>\n<p>            <img decoding=\"async\" alt=\"A building with two flagpoles with flags at half mast\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/03\/file-20260317-86-49y9s3.JPG\" class=\"native-lazy\" loading=\"lazy\"  \/><\/p>\n<p>              People pay their respects at a memorial on the steps of the town hall following a vigil the previous day in Tumbler Ridge, B.C., on Feb. 14, 2026.<br \/>\n              THE CANADIAN PRESS\/Christinne Muschi<\/p>\n<p>Rational strategy, absent framework<\/p>\n<p>It\u2019s important to be precise about what OpenAI is doing. The company is not acting in bad faith. It is behaving as a rational private entity in the absence of a regulatory framework, offering the minimum viable response to political pressure while preserving as much operational autonomy as possible.<\/p>\n<p>Look south and the logic becomes clearer. In the United States, the relationship between AI companies and government power is being forcibly renegotiated. The Pentagon has sought AI models with safety guardrails removed for military applications. 
<a href=\"https:\/\/thehill.com\/policy\/technology\/5763323-pentagon-stuns-silicon-valley-with-anthropic-ban\/\" rel=\"nofollow noopener\" target=\"_blank\">When Anthropic resisted, OpenAI moved to fill the gap<\/a>. In that context, the <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/mar\/04\/sam-altman-openai-pentagon\" rel=\"nofollow noopener\" target=\"_blank\">U.S. government commands and AI companies comply<\/a>. <\/p>\n<p>In Canada, the dynamic is inverted: OpenAI is not being commanded. It is volunteering concessions designed to pre-empt the kind of binding legislation that would actually constrain its operations. The playbook is simple: support broad norms with no immediate legal force; resist specific domestic obligations that carry real consequences. This is how regulatory capture begins: not with corruption, but with convenience.<\/p>\n<p><a href=\"https:\/\/www.theglobeandmail.com\/business\/commentary\/article-openai-tumbler-ridge-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">Canada has genuine leverage here<\/a>: an unusual cross-party consensus that something must change, public attention that has given AI governance a human face, and a provincial government that understands the stakes. <\/p>\n<p>But leverage evaporates. If the federal government accepts OpenAI\u2019s pledges as a sufficient response, it normalizes corporate self-regulation as the baseline. Future companies will cite this arrangement as precedent. The window for legislation narrows.<\/p>\n<p>What durable governance requires<\/p>\n<p>The response that Tumbler Ridge demands is not more efficient surveillance of users. 
It is a regulatory architecture that addresses the systems themselves.<\/p>\n<p>That means binding legislation with legally defined thresholds for when AI companies must refer flagged interactions to authorities: thresholds defined by Parliament, developed with mental health professionals, privacy experts and law enforcement, not inherited from a company\u2019s terms of service. <\/p>\n<p>It means an independent triage body so that flagged interactions are assessed by professionals equipped to distinguish ideation from intent, accountable to public law rather than corporate liability. And it means model-level accountability: regulatory attention that moves upstream from users to systems. How are these models designed to respond to escalating disclosures of violent ideation? What testing obligations apply? What auditing requirements exist? <\/p>\n<p>These questions are absent from the current political negotiations, and their absence defines the limits of what the current pledges can achieve.<\/p>\n<p>OpenAI\u2019s commitments following Tumbler Ridge are the beginning of a conversation, not the end of one. Canada holds good cards. 
The question is whether it plays them, or lets the other side set the rules while the table is still being built.<\/p>\n","protected":false},"excerpt":{"rendered":"In a span of two days following news that the Tumbler Ridge perpetrator\u2019s ChatGPT account had been flagged&hellip;\n","protected":false},"author":2,"featured_media":547576,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-547575","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/547575","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=547575"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/547575\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/547576"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=547575"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=547575"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=547575"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}