Sam Hawley: It can help gather intelligence, pick targets and plan bombings. Artificial intelligence is playing an integral part in the war in Iran. But while AI reshapes warfare, how can we be sure it’s not making mistakes? Today, Toby Walsh, the chief scientist at the AI Institute at the University of New South Wales, on what’s unfolding on the battlefield and how killer robots could fight wars in the future. I’m Sam Hawley on Gadigal land in Sydney. This is ABC News Daily.

Toby, you’re in Geneva right now, where the United Nations has convened a meeting, essentially about how to stop the rise of killer robots. Like, wow.

Toby Walsh: Yeah, well, it’s very clear that AI is changing not just how we work, but also how we fight war. And we see that on our screens. We see what’s happening in Iran. We see what’s happening in Gaza. We see what’s happening in Ukraine. We believe it was also used in the planning of the military operation in Venezuela, when the president there was seized. And it was also used, as far as we know, to help plan the logistics and to war-game what happened at the beginning of the Iranian conflict. It’s changing the way we go about warfare.

Sam Hawley: Yeah, so a good way to really explain this, of course, is to consider what we are seeing right now in this war with Iran. What do we know already in terms of the use of AI?

Toby Walsh: AI is increasingly being used to help make the decisions, to decide what the targets are. There were over 1,000 targets picked out in the first day of the conflict. That probably couldn’t have been done without the use of AI to help discriminate, to pick out the targets and decide where you were going to drop the ordnance. And you can see the military advantages of it, but equally you have to be concerned about the impact it has on the character of war, the way we go about fighting war, the mistakes that happen in war. When you’re trying to make decisions at that speed, you start to worry whether there’s adequate human oversight.

Sam Hawley: OK, so Toby, we know AI is already being used on the battlefield, but there is another battle playing out at the Pentagon. Now, this relates to a company called Anthropic and its AI model, Claude, which the US government was using. Now, you better explain what Claude is, because I know nothing about it.

Toby Walsh: Well, Anthropic is a company based in the United States, and Claude is its AI model, an alternative to ChatGPT. Many people actually prefer it to ChatGPT. Claude is the military’s favourite language model to use at the moment. Interestingly enough, Anthropic was set up by some OpenAI refugees, people who broke away from OpenAI because they didn’t think it was being safe enough, that it was taking enough caution in deploying the technology. And they’ve tried to take perhaps a more principled stance. One of the places where they’ve tried to do that is with the US military, who were using the tool for military purposes. Anthropic wanted to lay down two red lines. The first was the use of Claude for large-scale surveillance of the domestic population in the United States. And the second was its use in autonomous weapons, AI-guided weapon systems. And the Department of War has pushed back vigorously against this. Indeed, it’s now signed an alternative contract with OpenAI and has blacklisted Anthropic from military contracts in the future.

News report: US President Donald Trump has labelled AI firm Anthropic a radical left woke company.

News report: The US Defence Secretary, Pete Hegseth, says he’s named AI firm Anthropic a supply chain risk. Posting on social media, Mr Hegseth says, effective immediately, no contractor, supplier or partner that does business with the United States military may conduct any commercial activity with Anthropic.

News report: President Trump announced that the US government didn’t need or want Anthropic’s technology, calling the company’s stance a disastrous mistake that put American lives at risk.

Sam Hawley: Well, Toby, Anthropic’s CEO, Dario Amodei, told CBS News that his dispute with the Trump administration is a stand for American values.

Dario Amodei, Anthropic CEO: Disagreeing with the government is the most American thing in the world. And we are patriots in everything we have done here. We have stood up for the values of this country. It’s not about any particular person. It’s not about any particular administration. It’s about the principle of standing up for what’s right.

Sam Hawley: What do you make of that?

Toby Walsh: Well, in private, he said some quite rude things. There was an internal memo leaked to the press in which he was actually quite rude about the Trump administration and the fact that he hadn’t bent the knee to them like other tech companies, OpenAI, of course, being the one he was alluding to. That proved to be such a controversy that he’s actually had to go on the record and apologise. And I should say that these two red lines are actually not very controversial ideas. It is against the Constitution to surveil the domestic population. And the US military has already voluntarily imposed guardrails on itself about fully autonomous weapon systems. So it’s a bit strange in some sense that the Department of War has picked such a fight over this issue and hindered themselves, because it’s clear that this was their preferred model to use. But equally, this is an administration that does like picking fights and being a bit of a bully. In this case, Anthropic has stood up to the bully, which is, I suspect, the right thing to do if you want to walk out with your integrity intact.

Sam Hawley: OK, so the US government and Donald Trump, they’ve given Anthropic the flick. But, you know, Toby, just give me a sense anyway of how it actually worked. What do you do? You just type in, ‘Help me with a mission’ or ‘Help me find a target’?

Toby Walsh: Well, the great thing about these tools is they’re wonderful for summarising and synthesising information. You say, well, come up with some targets, identify where there might be military leadership, identify what might be the weak spots in the Iranian defence, where all the missile batteries are, and so on. Sift through all the information and come up with a ranked list of where we should be sending the ordnance. And it’s very good at doing those sorts of things.
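For a concrete sense of the workflow Toby is describing, here is a minimal, hypothetical sketch of how an analyst’s tooling might ask a language model to summarise free-text reports and return a ranked list. It uses Anthropic’s Python SDK; the prompt, the example reports and the `rank_sites` function are illustrative assumptions, not anything confirmed in the episode, and the model name is a placeholder.

```python
# Hypothetical sketch: using a language model to summarise and rank.
# Requires `pip install anthropic` and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY automatically


def rank_sites(reports: list[str]) -> str:
    """Ask the model to synthesise free-text reports into a ranked summary."""
    prompt = (
        "Summarise the following reports, then return a ranked list of the "
        "sites they mention, most significant first, with one line of "
        "justification for each:\n\n" + "\n---\n".join(reports)
    )
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; substitute a current model
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text


print(rank_sites(["Report A: activity observed at site X.",
                  "Report B: supply convoys routed through site Y."]))
```

The point is the shape of the task, not the domain: summarisation, synthesis and ranking over piles of text are exactly what these models are good at, which is why they speed the process up so dramatically.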

Sam Hawley: Yeah, so it can speed up that process, identify the targets. It can actually also help get the legal approval that you might need before launching a strike. I’ve seen it reported that it can do all this at the speed of thought, but it’s not human, and there’s no human oversight built in. So surely there has to be someone sitting there making sure it’s actually coming up with the right answers.

Toby Walsh: Well, there will be military analysts there looking at the outputs. But the problem, when you look at the speed with which they’re producing targets, and we’re talking about thousands of targets picked for the first day of the conflict, is whether there’s adequate human oversight, whether there’s actually time to consider each one. And then also, the tools are not very good at explaining themselves. If you ask, well, why did you say this particular building was a good target? What evidence are you basing your decision on? It’s actually very hard to get an answer out of these AI tools that you can be confident is the real reason, that it really is this particular intelligence report, this particular piece of information it found, that led to the claim.
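To make the explanation problem Toby describes concrete: you can always ask the model why it ranked something first, and it will produce a fluent answer, but that answer comes from the same generative process as the ranking itself, not from a trace of it. A hypothetical continuation of the earlier sketch, with the same placeholder model name:

```python
# Hypothetical illustration of the faithfulness problem: the "why" answer
# is itself a fresh generation, not a record of the original reasoning.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder; substitute a current model

question = "Rank sites X and Y by significance, given these reports: ..."
first = client.messages.create(
    model=MODEL,
    max_tokens=512,
    messages=[{"role": "user", "content": question}],
)
answer = first.content[0].text

# Feed the model its own answer back and ask for the supporting evidence.
# Nothing guarantees the quote it returns is what actually drove the
# ranking, which is why a human still has to check the source reports.
why = client.messages.create(
    model=MODEL,
    max_tokens=512,
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Quote the exact report text that supports your top choice."},
    ],
)
print(why.content[0].text)  # plausible-sounding, but possibly post-hoc
```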

Sam Hawley: As you said, the Trump administration is no longer using Claude. There’s been a big bust-up, if you like, and it has switched now to OpenAI, which of course is headed by Sam Altman. Now, are there guardrails in place there or not?

Toby Walsh: Well, OpenAI did claim that they kept these two guardrails, on domestic surveillance and on autonomous weapons. But legal experts suggest they’ve been watered down. It’s been very strange that they banned Anthropic and then turned around and signed with OpenAI exactly the contract that Anthropic wanted. So it seems doubtful. And indeed, it’s proven to be a bit of a poisoned chalice for OpenAI, because there’s been a very public campaign to unsubscribe from OpenAI. It’s been a bit of a PR disaster. And Anthropic has jumped to number one in the download charts, overtaking OpenAI for the first time. So, despite the fact they lost this very large contract, it’s perhaps proven to be quite a good bit of publicity for Anthropic in their race to catch up with OpenAI.

Sam Hawley: Hmm, all right. Well, whichever company, whichever model, I suppose, Toby, it is clear that AI will play a major role in militaries and wars into the future. It already is. That’s pretty clear. But, you know, as you’ve alluded to, it does come with huge risks, right? I mean, we’re talking about life and death decisions, normally made by a military commander, now being made by a machine. That just sounds so risky.

Toby Walsh: It does. There are moral, legal and ethical issues that we have to think about as we change the character of war, the way that we fight it, and hand over life or death decisions to machines. That does take us to a difficult place. And indeed, I’ve been at the United Nations in Geneva this last week, helping with the conversation that’s been going on there for over 10 years now, thinking about, well, what are the appropriate guardrails that we need to put in place? How do we ensure that warfare is still conducted according to the conventions, according to international humanitarian law? Because we don’t allow war to be all-out. Whatever happens, we do have various rules under which war is conducted.

Sam Hawley: But even with the regulations, I suppose the argument can be made that when you have a world with someone like Vladimir Putin in it, and for that matter, Donald Trump, who don’t actually obey international laws, it becomes even more dangerous to just pass this decision-making over to a machine, over to a computer.

Toby Walsh: Yes, we are living in difficult times, where there isn’t a lot of respect for convention and the rules-based order, as it’s called. We will be on the receiving end of these technologies if we’re not careful. And it won’t just be bad states like Russia or North Korea who are doing this. It will also be terrorist organisations and non-state actors who will be using these technologies. So it is in our interest to see if we can actually get some collective agreement. And despite the pessimism in your question, there are boundaries that we have put in place. We have limited, not completely but to a large extent, the use of chemical weapons, as an example. We’ve limited the use of biological weapons, nuclear weapons, cluster munitions, blinding lasers. Actually, when you list it out, there’s quite a host of technologies that we have put some limits on, and those limits are largely followed by the international community. And I think that’s what we could hope for here: we’re not going to keep the technology out of the hands of the military, but we can perhaps agree on some guardrails that will ensure the worst excesses of the technology are not seen, and that we fight war according to the dictates of the public conscience. It’s good to remember why it is that we don’t use chemical weapons. It’s because it became very clear, after the horrors of the First World War, that the public were outraged by what was being done. And that provided the conviction and strength for diplomats and politicians to actually do something. We could hope for the same here, because it’s easy to describe where we would end up if we don’t put in any limits: swarms of drones attacking civilian populations, attacking women and children. And I don’t think anyone anywhere wants us to end up in that sort of world.

Sam Hawley: Okay, so we certainly need some limits put in place, but Toby, it sounds like from everything that you’re saying, that nations will really need to embrace AI in their militaries. It doesn’t sound like we have much of a choice at this point.

Toby Walsh: No, there isn’t. And indeed, Australia needs to embrace that. I’ve struggled to understand why we’re spending so much money on manned submarines, when it’s clear the future of underwater warfare is going to be with uncrewed vessels, just as the future of aerial warfare and the future of land warfare are increasingly with uncrewed systems in those spheres of battle. So it’s going to be lots of cheap, much more disposable, much more autonomous weapon systems that are going to be used by our military, and those are the sorts of weapon systems that we’re going to have to defend ourselves against.

Sam Hawley: So what do you think? Should we all be a little scared of this, Toby? Or is it just a shift in military capability, just like we’ve seen in the past?

Toby Walsh: I think it’s one of the most radical shifts that we’ve seen. Others have called it the third revolution in warfare: the first was the invention of gunpowder, the second the invention of nuclear weapons. I think it’s probably right to suggest this is going to be another step change. And you can see it’s completely changed the character of the conflict in Ukraine, for example. Ukraine, which is a tenth of the size of Russia and has a tenth of the economic might, has been able to hold back the aggressor by embracing these sorts of technologies. The other point to remember is that it’s changing the strategic balance of the world. It used to be that your military might was determined by whether you could afford to build the aircraft carriers and the F-35 fighters and the nuclear-powered submarines. That determined your ability to project force. But increasingly, your ability to project force with these cheap platforms, which are increasingly intelligent, is not determined by your economic might. Iran itself was developing a very sophisticated capability in drones. And indeed, without those drones, it would be largely powerless to do anything in response to the attacks that have been happening against it.

Sam Hawley: Toby Walsh is the chief scientist at the AI Institute at the University of New South Wales. This episode was produced by Sydney Pead. Audio production by Anna John. Our supervising producer is David Coady. I’m Sam Hawley. And just a reminder that as podcast apps keep changing, we want you to be following ABC News Daily so you don’t miss an episode. If you haven’t already, please tap the follow or plus button on your podcast app. Thanks for listening.