The Canada Border Services Agency (CBSA) is looking to get a little help — from artificial intelligence.
The tool, called the Traveller Compliance Indicator, is intended to give officers a real-time summary of traveller data.
The agency says the technology will help identify compliant travellers faster and free up officers to focus on unknown and higher-risk travellers.
“[It’s] an indicator, and does not replace officer judgment or automatically determine outcomes,” the CBSA said in an emailed statement to CBC News.
“The actual decision on whether to refer a traveller for a secondary examination rests with the border services officer, whose specialized training, expertise, and knowledge allow them to always be on the lookout for potential threats.”
The CBSA says it launched a pilot project at six land ports of entry in 2023. As first reported by the Toronto Star, CBSA says it anticipates implementing the technology at all land ports of entry by the end of 2027 — with air and marine ports to follow at a later date.
Ebrahim Bagheri is a University of Toronto professor who specializes in the responsible development of artificial intelligence.
He says the border agency is trying to speed up risk assessment of the travellers waiting in line to cross into the country, in order to determine how much time and how many resources should be allocated to each case.
“The purpose of this is obviously cutting down costs, expediting processes, saving resources and what not,” he told CBC Radio’s Windsor Morning.
Bagheri says the risk measure is designed to streamline travellers who are considered low risk versus putting more resources toward people who are considered to have a higher risk.
But with that comes the potential for problems.
“The major risk is what we’ve observed in other similar systems, which is bias against certain subpopulations.”
He cites the COMPAS tool used in U.S. criminal courts to predict whether defendants would reoffend.
“We now know based on statistical studies that there’s shocking disparities where Black defendants were nearly twice as likely as whites to be misclassified as high risk based on the AI system.”
According to Bagheri, these types of AI systems are mostly trained on historical data, in which structural biases are hidden.
“Once you train this system based on such historical data, you will end up with systems that would expose biases one way or the other. The AI system will work as good as the data you provided.”
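The mechanism Bagheri describes can be seen in a toy sketch. The data, groups, and referral rates below are entirely hypothetical and have nothing to do with CBSA's actual system; the point is only that a model fit to past decisions reproduces whatever disparities those decisions contained.

```python
# Hypothetical illustration: a "risk model" trained on biased historical
# referral records simply learns and repeats the bias in those records.
from collections import Counter

# Made-up historical records: (group, was_referred_to_secondary).
# Group "B" was historically referred twice as often as group "A",
# regardless of actual compliance.
history = ([("A", True)] * 10 + [("A", False)] * 90 +
           [("B", True)] * 20 + [("B", False)] * 80)

# "Training": estimate a risk score per group from past referral decisions.
referrals = Counter(group for group, referred in history if referred)
totals = Counter(group for group, _ in history)
risk_score = {group: referrals[group] / totals[group] for group in totals}

print(risk_score)  # {'A': 0.1, 'B': 0.2} -- the 2x historical disparity survives
```

The model never sees anyone's actual behaviour, only past officer decisions, so the two-to-one disparity in the training data comes out unchanged as a two-to-one disparity in the scores.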
He says the CBSA needs to decide whether to have officers spend their time with the travellers trying to cross the border, making unbiased and careful decisions, while AI handles other aspects of the organization's work.
“If you think about areas where AI can help, we should typically think about how can I make the work that we are doing more efficient while fair. How can we inventively think about empowering our existing employees to do things that weren’t possible to do before, either because we didn’t have the technology or we did [and] we just didn’t have the time to spend on those things, and so on,” added Bagheri.
Union skepticism
The national president of the Customs and Immigration Union (CIU) says the union fears the new AI technology isn't being used to directly assist its officers.
The CIU represents about 12,000 workers, more than half of whom are front-line workers, according to Mark Weber.
“Usually you think of bread-and-butter union issues as being things like … salary, benefits, all those things,” he said.
“What we spend a lot of time bringing up with our employer is letting our members do their job right. We see technology really taking over an agency that has no interest in interdiction.”
The union’s collective bargaining agreement with the federal law enforcement agency expires next summer.
Weber says his membership is concerned about the agency pulling back on human-to-human interactions.
“You get good at … picking up on indicators, you learn what’s the difference. I think a lot of what we do is based on interviews. It’s based on speaking with travellers. It’s based on reading body language and reaction. I think AI … really is not able to do any of that.”
According to Weber, no one with “ill purposes” is going to voluntarily speak to an officer.
“No one’s ever going to self-declare at a kiosk that they’re smuggling. It’s never happened.”