The rise of artificial intelligence is forcing unions to adapt bargaining strategies to protect workers’ rights.
Artificial intelligence is reshaping the world of work. From shrinking work forces to stagnating wages, the fingerprints of automation and AI are increasingly visible in Canadian workplaces — and unions are taking notice.
According to a 2023 report from PwC, McKinsey, and the World Economic Forum, AI is expected to “fundamentally transform” the global work force by 2050, with up to 60 per cent of current jobs requiring significant adaptations due to the technology. A recent McKinsey report found that 30 per cent of current U.S. jobs could be automated by 2030, while Goldman Sachs predicts that as many as half of all jobs could be fully automated by 2045, a change driven by generative AI and robotics.
“We’re definitely starting to see AI tech being implemented in different ways, depending on the sector,” says Sarah Ryan, a senior research officer at the Canadian Union of Public Employees (CUPE), who has focused on AI in the workforce since 2024. CUPE members have seen AI enter their workplaces in a number of ways: AI closed captioning at TV stations, hospital wayfinders replaced by an AI system, and wearable geolocation technology that monitors a worker’s performance.
While these changes may seem incremental, Ms. Ryan warns that AI is being introduced into workplaces without the laws and regulations to protect workers or the public.
“When we’re looking at labour and employment laws, data protection and privacy laws, health and safety; our laws haven’t been updated for the digital age. We’re operating kind of in the Wild West to a certain degree with AI development.”
Employment lawyer Amy R. Tibble, of Toronto-based firm Hicks Morley, disagrees with the sentiment that there are no regulations in place. While she says it’s difficult to comment without particulars regarding the AI being referred to, “legislation is in place in Ontario that governs electronic monitoring of employees and use of AI in recruitment.” Starting January 1, 2026, employers in Ontario with 25 or more employees are required to disclose if any publicly available job posting uses artificial intelligence to screen, assess or select applicants.
For Sean O’Reilly, president of the Professional Institute of the Public Service of Canada (PIPSC), the largest union in Canada representing scientists and professionals employed at the federal, provincial and territorial levels of government, the increased application of AI in government functions is inevitable — but it must be properly managed.
“We welcome the adoption of AI in the government with the caveat that AI should be there to leverage the work, to help the public servants do their job,” he says.
The fear, Mr. O’Reilly adds, is that employers will view AI as a replacement for employees rather than a tool to support them. It’s a fear shared across industries and unions.
While Ms. Ryan says CUPE isn’t at the stage where large numbers of workers are losing their jobs, she cautions that signs of further job loss are emerging.
“We are starting to hear about the impact [of AI] in terms of job loss,” she says. “Sometimes, that’s workers losing their jobs and sometimes it’s employers that are just not filling vacancies when there is job loss or restructuring.”
So far, Ms. Ryan says, CUPE has seen AI applied largely to HR functions, including the hiring process, promotions, monitoring, disciplining and even termination of workers.
Of equal concern are surveillance and bias in data collection, especially in hiring and firing. As both Mr. O’Reilly and Ms. Ryan note, AI systems are trained on large quantities of data, which can include gaps or explicitly biased or discriminatory information. Ms. Ryan points to the 2018 example of the Amazon hiring tool, a recruiting engine that Reuters reported favoured men in the hiring process, allegedly penalizing resumes that included the word “women’s.”
For Ms. Tibble, it is of the utmost importance that her clients, employers, are equally informed and understand the AI program they’re using and how it fits into their goals, be it company growth or the growth and development of their workforce. This includes knowing where data originates, where it is stored and which algorithms are being used.
“If you’re using an AI product that is drawing its experience in creating its algorithms from an American base, that’s not going to translate well into the Canadian economy and the Canadian workforce,” Ms. Tibble says. “We have different laws.”
While she notes that many of her clients who have introduced AI into their workplaces, to varying degrees, have had overwhelmingly positive experiences with it, how AI is implemented is still in the discovery phase.
“Right now organizations are still finding their way as to what AI products and programs are out there and what exactly, if any, AI products or programs they want to use in the workplace.”
For unions, risks around bias in hiring and firing make advocacy more urgent.
“The way we help our members is, first and foremost, really advocating for better AI use across all sectors,” Mr. O’Reilly says. “We want to make sure protections are in place, so we are constantly working with the employers to make sure those protections are put in place and hopefully enshrined in collective agreements.”
But bargaining is difficult when technology is advancing faster than contract timelines.
David Mastin, president of the Elementary Teachers’ Federation of Ontario (ETFO), which represents elementary teachers and support workers in the province, notes that teachers have internally agreed to the language and terminology they’ll use when bargaining, but worry it won’t hold up.
“We sometimes bargain for four-year agreements,” Mr. Mastin says. “AI is evolving so rapidly that four years from now, if we have language that doesn’t either protect jobs or define certain concepts, we could find ourselves in a whole heap of trouble down the road.”
For now, unions are focused on education.
“What we’re trying to do is provide education and resources to our members on understanding AI,” Ms. Ryan says. “It can seem super complicated and overwhelming. [We want to increase] the level of knowledge and encourage workers to be aware of what’s happening in terms of technology in their workplace.” That includes encouraging members to ask employers about new systems: What data is being collected? How is it being used? And is it covered in their collective agreements?
“It should be the AI with the human working hand-in-hand to make decisions,” Mr. O’Reilly says. “AI should help humans make the decision, but at the end of the day, it is a human being making those decisions and not AI.”