We use this editor’s blog to explain our journalism and what’s happening at CBC News. You can find more blogs here.

“Choose News, Not Noise.”

That’s the catchphrase for a new CBC campaign aimed at reminding Canadians how our journalism can provide a safe harbour from the roiling seas of disinformation, fake news and AI-generated content sloshing through our feeds.

There’s plenty of research to suggest Canadians are feeling overwhelmed by this flood of questionable content. 

A new survey from the Canadian Journalism Foundation (CJF) found 88 per cent of respondents expressed concern about AI-generated deception in the news. Nearly half the people surveyed said they encounter misleading or false information daily or several times daily. 

Our goal is to position CBC News as the antidote to this growing problem, a place where you are guaranteed to find fact-based journalism that is always produced, verified and overseen by humans. A place where we are publicly accountable for our work through an independent Ombud and where we transparently own up to mistakes. (We are human, after all.) 

It has been 2½ years since we issued our first set of guidelines on the use of artificial intelligence at CBC News. As the technology advances and new tools emerge, we feel it’s time to update those guidelines, providing our teams with a few more current examples of how we can responsibly use AI to the benefit of our journalism and how we can avoid potential pitfalls that may erode public trust. 

To maintain the trust of our CBC News audience, all content, whether assisted by AI or not, must meet our rigorous standards for verification and accuracy. (Arlyn McAdorey/Reuters)

We want to share with you our new internal guidance for staff in the interest of full transparency. And perhaps these updated guidelines will be of interest to others grappling with how to use this technology in the newsroom, workplace, schools or elsewhere.  

The fundamentals of our guidelines remain the same as before: human oversight is mandatory; AI is the tool, never the creator; final editorial judgment, fact-checking and accountability always rest with our journalists; our journalistic standards must be met at all times; we will be open and transparent about any material use of AI in our journalism, and you, the public we serve, will never have to question whether something we’ve produced is real or AI-generated. 

At CBC News, we distinguish between generative AI, where content is produced primarily or entirely by an AI tool with minimal human intervention, and content that is AI-assisted, a collaborative process where the AI is used as a tool to enhance, accelerate or facilitate human journalism. 

Using generative AI to create original, public-facing content (text, video, audio or images) can be fraught and in our view poses the greatest risk to public trust if not managed carefully. We will be extremely conservative when using it, only with strict human oversight and transparency. As stated in the original guidelines, “No surprises: audiences will be made aware of any AI-generated content before they listen, view or read it.”

We do, however, see opportunities with AI in an assistive capacity. For example, the tool can be used to quickly interrogate vast amounts of data. Recently, we analyzed a number of town council meetings to identify contentious stories that had not yet been told by community media (subject, of course, to our own verification). 

Here, for the record, are the guidelines we shared with staff.

Introduction

These guidelines establish a framework for the responsible and ethical use of artificial intelligence (AI) at CBC News. Our core philosophy is that AI should empower our staff and augment our journalism. By integrating AI tools thoughtfully, we aim to enhance our productivity and create more time for demanding journalistic activities, while ultimately improving the experience for our audience. Adherence to these guidelines is mandatory for all staff who work under CBC News. CBC News is committed to further expanding its comprehensive AI literacy and continuous training for all journalists.

As AI is a rapidly developing field, these guidelines will be reviewed, updated and communicated on an ongoing basis.

Core principles and goals

Our use of AI is governed by foundational principles and strategic goals that align with our Journalistic Standards and Practices.

Principles

- Mandatory human oversight: A journalist must always be involved in the editorial process. AI is a tool; it is not the creator. Final editorial judgment, fact-checking and accountability rest with our staff.
- Accuracy and trust: To maintain the trust of our audience, all content, whether assisted by AI or not, must meet our rigorous standards for verification and accuracy. This is achieved through a commitment to our journalistic standards.
- Transparency: We will be open with our colleagues and our audience about how we use AI in our work, especially when it materially affects the content.

Goals

- Increase productivity and save time (improve employee experience): Responsibly automate and streamline routine tasks to free up staff involved with producing journalism for higher-value work.
- Improve the audience experience: Responsibly use AI to deliver content, for example in more accessible formats across various platforms.

Categories of AI use

It is important to distinguish between the two primary ways AI can be involved in content creation. 

At CBC News, our focus is on AI-assisted work.

- AI-assisted: This is a collaborative process where a journalist uses an AI tool to enhance, sharpen or accelerate their work. The human remains the primary creator and maintains full control over the creative and editorial direction. The AI acts as a collaborator.
- AI-generated: This is content produced primarily or entirely by an AI tool with minimal human intervention. As we committed in 2023: “We will not use or present AI-generated content to audiences without full disclosure. No surprises: audiences will be made aware of any AI-generated content before they listen, view or read it.”

The usual editorial and vetting processes must apply to all scenarios when AI is used in the production of news by CBC journalists. (Evan Mitsui/CBC)

Permitted creative assistance functions

With the core principles and goals listed above in mind, AI tools are approved for specific functions that assist workflows and enhance efficiency. The usual editorial and vetting processes must apply to all scenarios. The goal is to preserve the unique value of human journalistic judgment and creativity.

Approved uses fall into several categories:

1. Research and story development

- Brainstorming and outlining: Generating suggestions for story ideas and structures.
- Research and data analysis: Using AI to find information, identify trends in data and gather background context.

2. Assistance and review

- Headline and question suggestions: Generating a range of options for a person to consider and refine.
- Audience engagement suggestions: Generating a range of options for consideration and refinement, such as social media text, show descriptions, titles, chyron text, radio/TV bills, search engine optimization (SEO) and push alerts.
- Summaries: Creating concise versions of internal and audience-facing articles or transcripts that are always vetted by a human.
- Feedback: Using tools for grammar checks, style consistency and general story feedback.
- Translation: Performing initial translations that must be verified by a fluent human speaker for nuance and accuracy.

A note on approved experiments: To innovate responsibly, the CBC News AI Steering Committee may approve limited, short-term experiments that explore new AI tools and use cases. These projects will have specific oversight and are designed to help us learn. Participants will be clearly defined and the results will be used to inform future versions of these guidelines.

A note on content transformation: In specific approved instances where AI is used to transform content, such as text-to-speech or closed captioning, a human is not required to be in the loop prior to publication. 

Accountability and responsibility

Our core principle is that journalists, not AI, are responsible for our journalism. This means:

- We are accountable for our output: The final work is ours. We are responsible for its accuracy and integrity, regardless of the tools used.
- Be able to explain your process: You must be prepared to explain how you used AI tools if asked during the editorial vetting process.
- Maintain transparency: We will be open and honest with our colleagues about how we utilize AI in our work.

Usage guidelines for news teams

Mandatory practices:

- Employees must use corporate AI accounts approved for internal use by CBC/Radio-Canada. Do not use personal AI tools; this prevents sensitive data leaks and keeps our content from being used as training data.
- Drafts of content are permitted ONLY in corporate AI accounts or in approved internal CBC applications.
- Employees must verify and cross-reference all AI outputs, with a strong focus on fact-checking.

Prohibited uses:

- News division staff must not use AI to write articles or scripts.
- News division staff must not use image and video generators to create content for public-facing use.
- News division staff must not use generative AI features in photo or video software.

Audience disclosure

Transparency is key to maintaining audience trust. However, not every use of AI requires a disclosure.

When to disclose

Disclosure is mandatory when generative AI’s contribution affects the content in a materially significant way or when the content would not have been possible without the use of AI.

We will not use or present AI-generated content to audiences without full disclosure. No surprises: audiences will be made aware of any AI-generated content before they listen, view or read it.

Discussion is encouraged when deciding whether to include a disclosure. If you are unsure whether a disclosure is required, speak with your respective leadership teams.

Examples where disclosure is recommended:

- Analyzing massive datasets at a scale a human could not.
- Automated text-to-speech.
- Automated closed captions.

The key question to ask is: Is there any risk that the audience might be misled if we do not disclose the use of AI? If the answer is yes, a disclosure is required.

Include these details when a disclosure is necessary:

- What the AI tool did.
- Why the journalist used AI, ideally explaining how it benefited or improved news coverage.
- How humans were or were not involved in the process and/or reviewed the content before publication.
- An explanation of how the content still meets the newsroom’s ethical and accuracy standards.
- A link to the newsroom’s standards.

Example disclosure: 

In this story, we used (AI/tool/description of tool) to help us (what AI/the tool did or helped you do). When using (AI/tool), we (fact-checked, had a human check, made sure it met our journalistic standards). Using this allowed us to (do more of X, go more in depth, provide content on more platforms, etc.).

When disclosure is not necessary

You do not need to disclose the use of AI for routine, assistive tasks that do not substantively shape the final editorial product. Examples include but are not limited to:

- Using a generative AI tool for background research.
- Using generative AI for brainstorming.
- Using standard spellchecking, general story feedback or grammar-checking software.
- Using an AI tool for audio restoration/repair or colour correction.