Governments’ growing comfort with AI is bringing new ethical questions to the public service.
The popular explosion of large language models (LLMs) like ChatGPT is forcing agencies to contend with how to harness the productivity benefits while safeguarding against harm to citizens.
But are these questions new? AI expert Professor Toby Walsh doesn’t think so.
Walsh has spent more than 40 years following developments in AI, and holds or has held positions at dozens of globally recognised research institutions.
Speaking at the Digital Transformation Agency’s AI in Government showcase last week, Walsh said AI raises similar questions to other transformative technologies.
He said these parallels offer hints about what can go right or wrong.
“AI is not magic. It’s just another technology. Another tool,” he said.
“One of the big take-home messages I’ve got for you this morning is that the questions you should ask are the questions that you always ask.
“These are the questions you asked when the internet started, the questions you asked when we industrialised our factories, the questions we’ve asked throughout the course of history.
“There are new harms, and you will read in the newspaper plentiful stories of some of those places where things haven’t worked out.
“There’s new things to worry about because the technology, AI, will let us do things cheaper, faster, at a greater scale than previously.”
A Hippocratic oath for AI
Walsh pointed to medicine as an area where technological advancements have visible life-or-death consequences.
He said the four pillars of medical ethics (beneficence, non-maleficence, autonomy and justice) offer guidance about how to consider risk and reward in AI.
“Over a couple of hundred years, we thought very carefully about how to use medicine, medical technologies in a responsible way,” he said.
“That’s a very good place to start thinking about the responsible ethical use of AI.”
Beneficence
Beneficence is the idea that people should “do good”, or contribute in a net positive way.
Walsh used an example from his recent work to show how this can be applied in practice.
“I’ve been working with a big multinational company. They had a fleet of 800 trucks. We went in there and said, ‘We can optimise the routes of your trucks. I can save 15% of your fuel bill’,” he said.
“We were saving them $40 million a year on fuel, which doubled the company’s profits that year. So, they were very happy.
“But I was very happy for the planet because every one of those dollars was diesel that wasn’t going to put CO2 up into the atmosphere.”
Non-maleficence
Non-maleficence is to “do no harm”. This applies equally to individuals and groups.
Walsh said in the case of the logistics project, he wouldn’t have accepted the gig if it had resulted in mass layoffs.
“When we began the project, I remember saying to the CEO … ‘Please promise me that you won’t tell me, fantastic job, Toby, we’ve fired 15% of our drivers’.”
“At the end of the job, the CEO said, ‘Look, here’s the number of drivers. It’s the same as when you started, but they’re all happier, and the company is more sustainable’.”
Autonomy
Autonomy is about respecting individual rights. In medicine, it is the ethical principle that people should give informed consent for interventions.
Walsh said this principle is important in considering how AI is implemented in public-facing ways.
“When we put artificial intelligence into people’s lives, [we must] make sure that people are aware.
“I was somewhat disturbed by Google’s latest AI assistant that can ring up a hairdresser or ring up a restaurant and book a table for you.
“It umms and ahhs like a human. And it doesn’t say at the beginning of the call … ‘I’m Toby’s AI assistant, I’d like to make a booking for him’. It pretends to be a human.
“That doesn’t seem to me to be very good informed consent.”
Justice
Justice is the principle of fairness, which holds that risks and rewards should be shared evenly.
In the context of artificial intelligence, Walsh relates this to issues such as algorithmic and data bias.
“Data by its very nature is historical, and it reflects the biases of the system in which that data was captured,” he said.
“Maybe we’ve sent more patrols into poor neighbourhoods in the past, and that’s why we found more crime in poor neighbourhoods.
“We have the crime that was reported and prosecuted. There’s a lot of crime that took place that we never saw. It’s not clear how we could capture that statistic.
“This fundamental problem applies to so many places that AI is being applied to. We’re trying to predict something here, where we don’t have the ground truth. We have a proxy.”
The precautionary principle
Walsh proposes that the “seeming unpredictability” of AI warrants a fifth “precautionary” principle.
He said regulators should learn from what went wrong at previous technological pivot points.
“We are going to have to think carefully about the long-term consequences of things,” he said.
“[The precautionary principle] is an idea that was introduced into international and environmental law … where we might not fully understand the causal relationships between the technology and the impact it might be having on the environment.
“Lots of people are starting to use AI bots as therapists or companions. I think we need to be rather careful about the consequences of that.”