It’s been more than two years since The Globe and Mail published its first-ever guidelines on the journalistic use of artificial intelligence. The memo sent to the newsroom on April 20, 2023, was shared on The Globe’s website June 14, 2023, and two days later an e-mail from editor-in-chief David Walmsley directed subscribers to it and invited them to contact me with questions or concerns.
That first memo was about 800 words long. The updated guidance, which was shared internally with the newsroom in October, runs more than 2,000 words. If you were to guess that the rapid progress and rampant spread of AI technology have necessitated a lengthier and more detailed roadmap for the newsroom, you’d be right.
Because the original memo has been replaced on the website with a brief, high-level note rather than the full document, I want to share some of its key points here.
The fundamentals haven’t changed: AI can only ever be used with a human in the loop and only in ways that assist journalists, not to replace them. And when permitted uses of AI have contributed to The Globe’s journalism, those contributions must be clearly labelled, with a description of how AI has been used.
The updated guidance touches on the ways AI and machine learning can be used outside of the newsroom, to help ensure The Globe’s journalism is seen and read – for example by personalizing the homepage you see on The Globe’s website, or “noticing” the kinds of stories you like to read after work and suggesting more of those when you’re on the site at that time of day. (One of the best explanations of machine learning that I’ve found is from the website of MIT’s Sloan School of Management: “Machine learning is a subfield of artificial intelligence that gives computers the ability to learn without explicitly being programmed.”)
The guidance also cautions that “generative AI technology has some fundamental flaws that make it unsuitable for core writing and editing work.” For one thing, generative AI is only as good as the information that was used to train it, so it isn’t a reliable research tool.
AI output has been found to contain race and gender biases, hallucinations and factual errors. It can also be sycophantic – that is, it tells users what they want to hear. The newsroom guidance document points to this post by OpenAI, in which the company admits it “missed” this problem in an update to a version of its chatbot, ChatGPT.
The bot, OpenAI said, “aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended. Beyond just being uncomfortable or unsettling, this kind of behaviour can raise safety concerns – including around issues like mental health, emotional over-reliance or risky behaviour.”
The new guidance also urges caution around the use of seemingly benign AI tools such as voice-to-text transcription and even copy-checking functions: “Even seemingly innocuous requests like cleaning up typos and grammar can introduce errors, and summarization can combine ideas in subtle ways that alter the meaning of a passage or quote. This risks our reputation and the accuracy of our reporting.” Further, it says, “Some AI tools may require users waive certain rights in their content. Because of this, stories – including unedited drafts or unpublished stories – cannot be put through AI tools outside of use cases cleared by our legal team.”
Since the first version of The Globe’s AI guidelines was published, the international journalism community has learned a great deal about artificial intelligence. Progress has been made. For the second year in a row, 2025 submissions for the Pulitzer Prize in journalism were required to disclose any use of AI, a rule that has allowed prize administrators to get a sense of how top journalists are using AI in the research and presentation of their work.
“[AI] technology, when used appropriately, seems to add agility, depth and rigour to projects in ways that were not possible a decade ago,” Marjorie Miller, a long-time senior journalist at the Associated Press and administrator of the Pulitzer Prizes, told Nieman Lab. For example, AI tools can power through huge volumes of data much faster than humans can; they are also great at recognizing patterns that humans wouldn’t spot – at least, not quickly.
Among the work honoured last year was a 2023 New York Times investigation in which journalists used AI for pattern recognition. “The Pulitzer-winning team trained a tool that could identify the craters left behind by 2,000-pound bombs, one of the largest in Israel’s weapons arsenal. The Times used the tool to review satellite imagery and confirm hundreds of those bombs were dropped by the Israeli military in southern Gaza, particularly in areas that had been marked as safe for civilians,” Nieman Lab said.
Some mistakes have been made as well. And in many ways, AI has become much scarier. Remember that scene in 2001: A Space Odyssey in which the sentient computer HAL refuses to open the pod bay doors at his human operator’s urgent command? In June, some 57 years after the release of the film, San Francisco-based AI company Anthropic shared that stress-tested LLMs (large language models, a type of AI) “resorted to malicious insider behaviors.”
The New York Post was markedly less chill in its headline: “‘Malicious’ AI willing to sacrifice human lives to avoid being shut down, shocking study reveals.”
Even setting aside the Post’s reputation for sensationalist headlines, the finding is discomfiting, to say the least.
And that’s why guardrails – such as the newsroom’s guidance document – are essential.