The Office of the Information and Privacy Commissioner of Alberta wants the provincial government to create a regulatory framework to govern the use of artificial intelligence as its rapid expansion continues — especially into Alberta’s health-care system.
“Active adoption” of artificial intelligence use in health care is here, according to a report from the OIPC.
The report notes that the OIPC “is of the view that Alberta could benefit from a standalone law to regulate AI in the province that works alongside privacy laws, the latter of which will protect the rights afforded to individual Albertans concerning the collection, use or disclosure of their personal or health information to train AI, and to use AI where this involves processing personal or health information.”
Survey results released by the OIPC in February suggested there is some opposition to the use of personal health information to train AI, as well as concern about the impact of AI on health-care outcomes.
Survey respondents also indicated a desire for a heads-up if AI is used to make decisions about health care.
In a statement, a spokesperson for Alberta’s Ministry of Technology and Innovation told CBC that it is exploring its options. The statement said the ministry believes AI is an “incredibly powerful tool with transformative potential,” but that it also carries risks of misuse.
“Given this, we are exploring effective regulatory approaches that safeguard Albertans’ privacy, ensure safety, and uphold public trust, while also supporting innovation,” Jonathan Gauthier said in an email.
The risks
Blair Attard-Frost, assistant professor of political science at the University of Alberta, said the OIPC’s report is a good first step.
“It’s good to have some kind of … acknowledgement that this is something that we need to create standards, principles around,” she said in an interview on Wednesday.
“And then we can, in the coming years, work towards making those enforceable, putting those within legal frameworks, creating regulations around it that are more legally enforceable and more binding.”
Attard-Frost said that in health care, the concern is perhaps less about the government itself using AI and more about health-care providers using AI in their operations — and about whether the government maintains strong regulatory oversight of things like confidentiality and the protection of sensitive health data.
“So I think there is an expectation that the government will continue that precedent of having strong regulation within the health-care sector when it pertains to AI,” said Attard-Frost, who is also a fellow at the Alberta Machine Intelligence Institute.
The OIPC’s report on AI regulation says that personal information is often a core component of AI development and deployment.
“Many of the harms associated with the use of AI stem from the (mis)use of personal information,” the report reads.
“The OIPC is already reviewing privacy impact assessments concerning the use of AI by entities subject to our privacy laws.”
Automated diagnostic tools, or models that make decisions about resource allocation in hospitals, are examples of how people might encounter AI in health care.
Attard-Frost said people would likely want to see those tools have stringent design standards, as well as oversight and accountability.
Chris Stinner, assistant commissioner for strategic initiatives and information management with the OIPC, said that AI scribe tools — which transcribe and summarize conversations between clinicians and patients — are becoming increasingly popular in health care as well.
The solution lies in striking a balance between benefit and risk, he said.
“The last thing everybody wants, and I think we can agree on that, is for Alberta to become a poster child of a botched AI adoption or …[for] Alberta to be added to the list of things gone horribly wrong when it comes to AI,” Stinner said.
Even with the privacy and health information legislation that already exists in Alberta, it’s important to have legislation that specifically addresses AI, Attard-Frost said.
If you leave decisions to AI with no oversight, the potential speed and scale at which things can go wrong is “qualitatively different” than the risks even 10 years ago, she said.
Training a large-scale, enterprise-level AI system requires a mountain of data, mostly images and text. In health care, that can mean pictures of people’s faces, or of medical conditions that present physically.
Stinner said anyone, including those working in the public sector, should be careful about what data is used to train an AI system.
“Right now, we’re in a situation where there are no set standards as to how the data could be anonymized or de-identified, or how synthetic data could be derived from actual personal information or health information,” he said.
“So one of the things that is sorely needed is agreed-upon standards and principles that ensure reliability.”
The rewards
While there are considerable risks concerning privacy and the training of AI in Alberta, Attard-Frost said on the other side of that is a potential boon for the health-care system in the province.
“You could be able to diagnose diseases more accurately,” she said. “You could be able to expand access to health-care services.”
Attard-Frost said AI-assisted diagnostic services and prescription filling for lower-risk illnesses could become possible.
“I know a lot of what we’ve been talking about seems like there’s all these risks, right? But the reason that we want to try to mitigate the risk is to be able to potentially benefit from increased access to service, increased efficiency of services, increased quality of life, health outcomes.”
Stinner agrees.
“We recognize and appreciate the promises of AI, the promises both in the productivity gains for the health-care sector, efficiency, accuracy and then the positive impact this could have on the Alberta patient,” he said.
What’s next?
Federally, there is still much uncharted territory in the field of AI. Evan Solomon is Canada’s first minister of AI and digital innovation. Before he took on that role, a proposed Artificial Intelligence and Data Act (AIDA) died on the order paper ahead of the federal election in the spring.
“There hasn’t been too many more specifics from the new government about what their regulatory approach is going to be,” Attard-Frost said.
“I think there’s an opportunity for provincial governments to step in, kind of fill that regulatory gap in the meantime and start thinking about consulting the public — creating regulation that serves their provinces in the absence of really strong federal direction on AI regulation.”
Gauthier said in his statement that Alberta’s Ministry of Technology and Innovation looks forward to working with the OIPC as it moves ahead and monitors regulatory developments across the country.