What do AI firms do when users tell chatbots their dark, violent thoughts?

Photo: iStockPhoto / Getty Images

When it comes to the dangers posed by chatbots, much of the conversation has centred on the potential harms caused by what these applications can tell us. Chatbots powered by artificial intelligence can be sycophantic and reinforce our worldview by mirroring our language and thoughts. Those traits are at issue in instances of teen suicide and for people who say they have experienced delusions from talking to chatbots at length.

The case involving the Tumbler Ridge, B.C., shooter appears to involve the reverse – information a user confided to a chatbot.
What exactly 18-year-old Jesse Van Rootselaar discussed with OpenAI's ChatGPT months before fatally shooting eight people on Feb. 10 and then killing herself has not been disclosed, nor do we know what the chatbot said in reply. We do know that OpenAI had flagged her conversations but opted not to contact law enforcement last summer.

Photo: The deaths of eight people in Tumbler Ridge, B.C., on Feb. 10 marked one of the worst mass shootings in recent Canadian history. Jennifer Gauthier/Reuters

The incident has exposed a glaring gap in AI oversight, and shown how the rapidly advancing technology is presenting novel issues that defy easy answers. Under what circumstances, for example, should AI companies report potentially dangerous interactions to law enforcement?

"What really strikes me here is the revelation that OpenAI is recording potentially all user chats and sending chat logs to law enforcement on a selective and proactive basis," said Blair Attard-Frost, an assistant professor at the University of Alberta who studies AI governance.
"AI companies in Canada have been given significant latitude to decide on their own safety standards."

The incident has also highlighted the power and responsibility wielded by AI companies. ChatGPT has roughly 800 million users, close to 10 per cent of the world's population. Some people are sharing intensely personal thoughts and feelings with chatbots, treating them as trusted companions or therapists, when in reality these are products operated by corporations that have little to no duty of care to users. When those conversations turn to harming others, there is no rulebook in Canada for what AI companies should do next.

For some experts, the incident reinforces the need for Canada to introduce legislation for AI companies to protect public safety and guard privacy. B.C. Premier David Eby, for one, has called for rules for when AI companies alert police.
Canada has no overarching AI legislation and, unlike some other jurisdictions, does not have a set of rules that apply specifically to chatbots. In fact, in his first public speech as federal AI Minister last year, Evan Solomon said Canada would avoid "over-indexing on warnings and regulation" in order to take advantage of the economic benefits of AI.

"Our approach has always been to make sure that we are building a safe and reliable environment," Mr. Solomon told reporters Thursday. "But the urgency has changed."

Photo: B.C. Premier David Eby speaks after the province declared a day of mourning at the legislature in Victoria on Feb. 12. Chad Hipolito/The Canadian Press

The federal government has been looking at updated privacy and online harms legislation, which could touch on AI platforms.
Neither bill has been introduced yet, nor has the government indicated whether chatbots will be covered by online harms legislation, as some experts have urged.

Tackling the issue is fraught. Should AI companies define their own procedures for reporting to law enforcement? Or should government? And how would measures be enforced? Any provisions would need to strike a balance between privacy and safety, and take care to set appropriate reporting thresholds. Too low, and police could be showing up at the homes of Canadians over benign conversations. Too high, and some tragedies may not be averted.

For some experts, these questions are coming too late. The real-world dangers of AI are well known, but regulation in Canada has not kept pace. "We could be in a much better place had there been some more serious discussions," said Fenwick McKelvey, associate professor of communication studies at Concordia University. "None of this was unexpected."

The desire to leap to regulation after a tragedy like that in Tumbler Ridge is understandable, but AI companies have not been transparent about how they report to police, making it difficult to assess what needs fixing. "It's really hard to talk about a regulatory solution when there's a complete vacuum about what we know," Prof. McKelvey said. The fact that Mr.
Solomon had to hold a meeting with OpenAI on Tuesday to learn about its safety protocols underscores that reality.

Photo: Residents hug as they place flowers at a memorial for the victims of the mass shooting in Tumbler Ridge, B.C. Christinne Muschi/The Canadian Press

According to The Wall Street Journal, Ms. Van Rootselaar discussed scenarios involving gun violence with ChatGPT over several days last year, and those conversations were flagged by an automated review system. About a dozen employees debated whether to contact law enforcement, but OpenAI leaders decided against it. The company banned her account in June, 2025.

OpenAI has said that it refers cases to authorities when a user presents an imminent and credible risk of serious physical harm to others.
The conversations did not meet that bar because the company did not identify credible or imminent planning, according to OpenAI.

"Were these people equipped to make that kind of judgment call and should they or OpenAI be in that position?" said Katrina Ingram, founder of Ethically Aligned AI, a consultancy in Edmonton. "In the absence of any other rules or regulations, private companies will set their own policies."

Photo: Minister of Artificial Intelligence Evan Solomon on his way to a caucus meeting in Ottawa earlier this week. Justin Tang/The Canadian Press

In a letter OpenAI sent to Mr. Solomon and other ministers on Thursday, the company offered a little more detail about its procedures. Vice-president of global policy Ann O'Leary wrote that "several months ago" OpenAI worked with mental health and law enforcement professionals to refine criteria for when conversations merit a referral to authorities.

"Mental health and behavioural experts now help us assess difficult cases," Ms.
O'Leary wrote, adding that OpenAI's criteria are now more flexible, to account for the fact that a user might not discuss the target, means or timing of planned violence even though an imminent risk could still be present. "Under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today," she wrote.

The letter does not make clear exactly when those changes went into place, and it leaves other questions unanswered, such as the details of the company's refined risk criteria and how further progress will be monitored and assured. "This looks like an attempt to preserve the status quo of industry self-regulation through voluntary commitments," said Prof. Attard-Frost, adding that action taken by one company does not ensure that other industry players will adopt the same standards.

In a statement Friday, Mr. Solomon said he will meet with OpenAI chief executive Sam Altman next week and seek more details from the company, including how human review is conducted. "We have not yet seen a detailed plan for how these commitments will be implemented in practice," he said. "All options remain on the table." He will also meet with other major platforms in the coming weeks.

OpenAI is not the only tech giant with a consumer-facing chatbot.
Google and Anthropic did not reply to requests for comment about their own procedures.

A spokesperson for Meta Platforms Inc. declined to comment but provided links to policy stating the company may notify law enforcement about emergency situations, such as risk of death or imminent bodily harm. The spokesperson also sent a video about how Meta stopped a suspected school shooting in the U.S. by reporting social media content.

Chatbots are more private and intimate than social media. Mr. Altman has acknowledged that some people share deeply personal matters with ChatGPT. "Young people especially use it as a therapist," he said in a podcast interview last year. The company was still figuring out the privacy implications, he said, but he argued such conversations should be afforded strong protections from disclosure.

"If you go talk to ChatGPT about your most sensitive stuff and there's a lawsuit, we could be required to produce that," he said. "That's very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist."

While therapist notes can be subpoenaed, Mr. Altman's suggestion that the company should be granted any of the same privileges as mental health providers is striking. People may treat ChatGPT like a therapist, but OpenAI is beholden to none of the standards that govern mental health professionals in Canada. Moreover, the incentives are skewed. Chatbot providers have a financial stake in keeping conversations flowing, and these interactions can be valuable for training future AI models and delivering targeted advertising.
"There's a willingness to think about these conversations as profitable, but not the liability embedded in them," Prof. McKelvey said.

Photo: iStockPhoto / Getty Images

Therapists draw on a lot of context to determine whether someone presents a risk of harm to themselves or others, including personal histories and existing diagnoses, and they may consult others, such as clinical supervisors. "There is a necessity for you to be well-trained, and have a really sound system for understanding what is presented in front of you," said Candice Alder, a psychotherapist in B.C. and teaching fellow at the Center for AI and Digital Policy.

An AI company looking at a chat transcript does not have the same context, making an assessment more difficult. Nor does expressing harmful thoughts mean someone will act on them. "I can tell you as a therapist the kinds of things that young people say on the internet are not always a reflection of exactly what is going on," she said.

Because of the personal nature of chatbots and the sensitive data amassed by AI companies, the potential for these applications to harm the public is only growing, experts say. Many professions that hold influence over individuals, such as law and medicine, have regulatory bodies and standards.
"I don't see why it should be any different for companies that offer a product that is now embedded in the lives of a large part of the population," said Vincent Denault, assistant professor at the University of Montreal's School of Criminology.

On that front, other jurisdictions are further ahead than Canada. The European Union's AI Act requires developers of general-purpose AI systems to perform safety tests and mitigate risks. Proposed federal AI regulation in the U.S. would put a "duty of care" on developers to prevent and mitigate foreseeable harm to users, and would require companies to regularly assess how their systems can contribute to psychological harms.
Both New York and California have legislation requiring chatbot providers to notify users that they are not talking to a human, and to have protocols for addressing suicidal ideation and self-harm.

Photo: The Canadian flag hangs at half-mast at the legislature in Victoria, B.C., on Feb. 11. Chad Hipolito/The Canadian Press

Canada is also the only G7 country with no online harms legislation and no digital safety regulator. The EU's Digital Services Act requires online platforms to report when they become aware of information indicating a threat to life and safety, but does not require them to actively monitor communications.
The European Commission is assessing whether ChatGPT is covered by the legislation.

Emily Laidlaw, associate law professor at the University of Calgary, said Canada could draw from the European approach. "There's some room for Canada to consider what would be appropriate here to add to law, but it will still always be a baseline," she said.

Setting that threshold is tricky. When the federal government was working on online harms legislation a few years ago, initial proposals requiring social media platforms to report harmful content to law enforcement alarmed experts. "It was so broadly framed that the pushback was pretty extreme," Prof. Laidlaw said.

Photo: A memorial on the steps of the town hall in Tumbler Ridge. Jennifer Gauthier/Reuters

Indeed, there is a risk of infringing on privacy and civil liberties if company policies or proposed government measures go too far. If tech companies are required to monitor and report chatbot interactions, why stop there? Any form of written communication held by tech companies – texts, e-mails, searches – could theoretically provide hints of an impending crime.
That kind of regime veers into a surveillance dystopia.

Even if the threshold is set sufficiently high, companies may have an incentive to overreport in order to reduce their liability and ensure compliance. More cases flagged to police, even if not entirely credible, could have the unintended consequence of causing harm to the public. "This might disproportionately impact certain groups of people who get falsely flagged," Ms. Ingram said. "That would be something to address in the process and to ensure a redress mechanism."

Still, the same problem can arise if companies are permitted to develop their own policies. "Inevitably, without a mandated standard we end up with inconsistent results," said Jon Penney, an associate professor at York University who researches AI and the law.

Prof. Penney said any measure has to be narrowly tailored and specific about which threats must be reported, apply a "common sense" standard to what constitutes an imminent threat, and codify the factors companies should use in exercising discretion. Transparency is key, too: the law should compel companies to disclose their procedures.

"We cannot simply leave it to companies, who almost surely are weighing not just privacy and public safety, but also corporate, brand, profit, and reputational considerations," he said.

Earlier this week, Justice Minister Sean Fraser warned of legislative changes should OpenAI not improve its safety protocols.
Now that OpenAI has started that process – its letter said it would continue enhancing procedures, develop a direct point of contact with Canadian law enforcement and better detect users who repeatedly violate its policies – the next step may lie with government.

But the letter also revealed major shortcomings. The fact that OpenAI is vowing to establish a point of contact with Canadian law enforcement suggests it did not have one before. The company also failed to detect that Ms. Van Rootselaar had a second ChatGPT account, discovering it only after her name had been made public.

"It's actually proof that their safeguards failed twice, first in deciding not to refer the original account to law enforcement, and then in failing to catch a repeat offender re-entering their platform," said Helen Hayes, associate director of policy at the Centre for Media, Technology and Democracy. "The commitments in this letter should be read as a response to systemic failure, not an isolated error."

While the focus on AI is justified, especially because the technology is so new, it is worth remembering that existing systems have flaws, too.

In the case of Tumbler Ridge, police had visited the shooter's home multiple times over the past few years owing to mental health issues. Officers seized firearms from the home two years ago, but someone in the family successfully petitioned for their return.

"The important thing is to look at the whole ecosystem of intervention," Ms. Alder said.
"Not just the technology."

With a report from Irene Galea