{"id":532640,"date":"2026-03-19T10:54:21","date_gmt":"2026-03-19T10:54:21","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/532640\/"},"modified":"2026-03-19T10:54:21","modified_gmt":"2026-03-19T10:54:21","slug":"speeding-up-the-kill-chain-pentagon-bombs-thousands-of-targets-in-iran-using-palantir-ai","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/532640\/","title":{"rendered":"Speeding Up the \u201cKill Chain\u201d: Pentagon Bombs Thousands of Targets in Iran Using Palantir AI"},"content":{"rendered":"<p>This is a rush transcript. Copy may not be in its final form.<\/p>\n<p>AMY GOODMAN: As the U.S. and Israeli war extends into its 19th day, we turn now to look at how the U.S. is using artificial intelligence to identify and prioritize targets. The system, known as Project Maven, was created by Palantir, and it incorporates the AI model Claude, built by Anthropic. The Pentagon is investigating if the AI system played a role in the U.S. strike on the Iranian girls\u2019 school that killed over 170 people, mostly girls.<\/p>\n<p>This is CENTCOM Commander Admiral Brad Cooper talking about the use of AI in Iran.<\/p>\n<p>ADM. BRAD COOPER: Our war fighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react. Humans will always make final decisions on what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes that used to take hours, and sometimes even days, into seconds.<\/p>\n<p>AMY GOODMAN: Israel has used similar AI targeting programs in Iran, as well as in Gaza and Lebanon. The Pentagon also reportedly used the AI tools during the recent military attack on Venezuela when U.S. 
Special Forces abducted the Venezuelan President Nicol\u00e1s Maduro and his wife, Cilia Flores.<\/p>\n<p>This comes as a major rift has emerged between Anthropic and the Pentagon after Anthropic moved to restrict the use of its technology for mass surveillance of Americans and for fully autonomous weapons. In late February, President Trump ordered federal agencies to stop using Anthropic products. Defense Secretary Pete Hegseth declared the firm a supply chain risk, effectively cutting it off from government contracts and related work. It marked the first time the Pentagon has designated a U.S. company as a supply chain risk, prompting Anthropic to sue. On Tuesday, CNN reported that nearly 150 retired federal and state judges have filed an amicus brief supporting Anthropic in its lawsuit against the Trump administration.<\/p>\n<p>We\u2019re joined now by Craig Jones, senior lecturer in political geography at Newcastle University, author of The War Lawyers: The United States, Israel, and Juridical Warfare. He\u2019s the co-author of a new <a href=\"https:\/\/theconversation.com\/iran-war-shows-how-ai-speeds-up-military-kill-chains-278492\" rel=\"nofollow noopener\" target=\"_blank\">article<\/a> in The Conversation headlined \u201cIran war shows how AI speeds up military &#8216;kill chains.&#8217;\u201d<\/p>\n<p>Why don\u2019t we start there, Professor Jones?<\/p>\n<p>CRAIG JONES: Thank you.<\/p>\n<p>Yeah, I mean, the U.S. military, the Israeli military, as your headlines have said, using AI, the kill chain is a bureaucratic mechanism whereby militaries go from trying to designate targets, to identify enemies and military targets, to the process of actually killing them. They\u2019re in the process across the 20th century, early 21st century, of speeding that process up. Military drones have helped greatly with that. And the latest front of that is AI. 
As Admiral Brad Cooper talked about, you\u2019re reducing a massive human workload of tens of thousands of hours into seconds and minutes. You\u2019re reducing workflows, and you\u2019re automating human-made targeting decisions in ways which, I think, you know, open up all kinds of problematic legal, ethical and political questions.<\/p>\n<p>AMY GOODMAN: The U.S.-Israel war in Iran is being described as the first AI war. Explain what that means, Craig.<\/p>\n<p>CRAIG JONES: Yeah, I would say it\u2019s not quite the first AI war. As you mentioned, Israel has used AI in Gaza. I think this was the first major use of AI in warfare. I think, actually, the history goes back a little longer, with computer programs partially enabled by AI having been used in the background of military systems for several years now. It was used in a major way in Gaza in the first few months, where we saw tens of thousands of targets put in a target bank operated by military intelligence. Up to 35,000 suspected Hamas combatants found themselves on this list as Israel worked through that to assassinate them, as well as tens of thousands of targets that are ultimately part of the civilian infrastructure. As you\u2019ve said, the U.S. has used it with Maduro, and now Israel and the U.S. are also using these systems in Iran.<\/p>\n<p>The key innovation here is twofold. The first is the use of AI for intelligence analysis. Intelligence, military intelligence, is multi-format. There is so much of it. It hoovers up what they call signals intelligence, so mobile phones, internet traffic, SMS, mobile phone tracking, all kinds of things. And the AI systems are being used to spot what militaries call patterns of life \u2014\u00a0you know, who meets with who, who talks with who, what is the nature of the messages, how are they interacting in ways which are deemed suspicious. And the AI systems look for those patterns and make recommendations, which is the second innovation, for targets. 
They nominate targets to this bank of targets, which then has \u2014 which we can talk about \u2014\u00a0some technical human oversight. And that\u2019s problematic, I think. It\u2019s problematic because that\u2019s a really persuasive technology. It\u2019s nominating hundreds, thousands of targets potentially a day, and it\u2019s working at speeds which are just beyond, you know, the evolution of human cognition in, again, ways that are problematic.<\/p>\n<p>AMY GOODMAN: Can you explain \u2014 I mean, this is being investigated by everyone, including the U.S. government and the Pentagon \u2014\u00a0how it\u2019s believed Palantir was used in the first strikes, the first day of the U.S.-Israeli war on Iran, and may have been involved in the targeting of a girls\u2019 school in southern Iran, using the tools of Palantir and Claude, which is a property of Anthropic?<\/p>\n<p>CRAIG JONES: Yeah, so, this strike on the girls\u2019 school is at the moment the leading kind of civilian casualty incident, in which around, as you\u2019ve said, 170, mainly girls, were killed, innocent civilians. At the start, we should remember some of the history of this. It was denied by the U.S. military. Trump insinuated at one point that it was an Iranian missile. It was later verified that it was indeed a series of U.S. Tomahawk missiles that struck this area. And a U.S. preliminary investigation has now found and confirmed indeed what many people thought, which was that the U.S. is responsible.<\/p>\n<p>It looks \u2014\u00a0we\u2019re not yet clear on the role of AI in that particular strike. Whether that becomes clear in the coming days and weeks, we\u2019ll have to see. What we do know is that Claude, the Anthropic model, deployed by Palantir, has been extensively used to do several things, including the intelligence analysis. So we can deduce that that AI system is not yet capable of detecting these things, or is at least, you know, open to making systemwide errors. 
It did not identify the school as a school,\u00a0in an extremely problematic way in which, you know, within a couple of days, organizations such as The New York Times were able to verify via satellite imagery that a wall had been put up around 13 years ago between the school and an IRGC compound that was nearby. If you had been watching drone footage from above, as militaries have the capability to do, just for, you know, half an hour before or a few hours before, you would have seen, you know, that morning 170 girls dropped off by their parents, and that would have been identified as a nonmilitary target with clearly civilian usage.<\/p>\n<p>AMY GOODMAN: But let\u2019s get \u2014\u00a0<\/p>\n<p>CRAIG JONES: So, we don\u2019t yet \u2014<\/p>\n<p>AMY GOODMAN: Let\u2019s drill down into this, because, yes, there was this military facility right next to it. As you described, years ago, a wall was built between the two, so you\u2019ve got the school very clearly identified. But how does AI work, where you have this old, what, 10-year-old perhaps, information about it being a military base that\u2019s fed in, and then it is never updated? Where do human beings come into this?<\/p>\n<p>CRAIG JONES: Yeah, this is a really important question, where it, you know, gets tricky. But we could \u2014\u00a0we know a lot already. So, it looks like it\u2019s just an intelligence failure, that an area marked on a map, this is \u2014 you know, the entire area has been marked as a military compound. There are obligations, you know, legal obligations and ethical obligations, and just political obligations, within defense intelligence agencies to check this.<\/p>\n<p>And what happens is, some of these targets are nominated from U.S. military bases back in the United States. I\u2019ve worked with some of those people over the last several years on what they call target nomination, what it looks like. 
They hand that over to CENTCOM, who I know you cover. And they have bases in the Middle East. There\u2019s a central one based in Qatar, where these targeting decisions are executed. There is an obligation for CENTCOM to check and double-check that intelligence, that it\u2019s up to date, that everything\u2019s kosher on the target. It\u2019s clear that that was not done. There should be human oversight of that, even if it\u2019s AI-recommended or even if it\u2019s human-recommended. There should be some human intelligence checking. It looks like that didn\u2019t happen, for whatever reason \u2014\u00a0and we don\u2019t yet know why.<\/p>\n<p>What happens also, and this is a really interesting technicality, is that everything in a society that the U.S. military is targeting is de facto placed on a no-strike list, because everything is assumed to be civilian. And in order to strike it, you need to put it \u2014\u00a0get it off the no-strike list to be able to target it. So, the question here is: Why was this school taken off a no-strike list, deemed a legitimate military target? It looks like a combination of AI and human intelligence failure, to produce something, you know, truly catastrophic.<\/p>\n<p>AMY GOODMAN: And talk about how Palantir interacts with Claude, which is owned by Anthropic, especially for the Luddites who are listening all over, for people who don\u2019t quite understand how this all works.<\/p>\n<p>CRAIG JONES: So, yeah, from what we know, Palantir is a system, much like a deep software system that \u2014\u00a0you know, like a video game, that has all kinds of inputs, that you can look at targets. You have all kinds of variables, like, you know: What size missile should we drop? What is the compound that we\u2019re looking at? What\u2019s it made out of? All these human \u2014 these variables with intelligence overlays. 
And then, in the same way that software works on a computer, Claude is the thing in the background which is kind of, you know, doing the processing of that data, making those recommendations. And then it provides the human with some parameters that the human or operator or targeteer can then kind of play with.<\/p>\n<p>Obviously, it\u2019s highly sensitive and secretive, and beyond the very few people using it, you know, even the designers at Anthropic would be a very small number of people who have the intelligence clearance and who\u2019ve seen this stuff working with sensitive military data. From some of the things they\u2019ve released, like the demos, we can see some of what that looks like. And one of the most worrying developments that I\u2019ve seen, from what\u2019s publicly available, is the lack of attention and ability to track civilian casualties within those programs. And that is something which we\u2019ve seen. You know, the attention to civilian casualty harm that the administrations have built over several years in the U.S. Department of Defense has been eroded by the Trump administration, by this war on the lawyers, and you actually see that now programmed into the software.<\/p>\n<p>AMY GOODMAN: This is Palantir CEO Alex Karp, interviewed on CNBC last week.<\/p>\n<p>ALEX KARP: These technologies are dangerous societally. The only justification you could possibly have would be that if we don\u2019t do it, our adversaries and \u2014\u00a0will do it, and we will be subject to their rule of law. 
So, if you decouple this from the support of the military, you\u2019re going to have an enormous problem explaining to the American people why is it that we\u2019re absorbing the risk of disrupting the very fabric of our society, including the most powerful parts of our society, if it\u2019s not because it\u2019s about maintaining our ability to be American in the near term and long term.<\/p>\n<p>AMY GOODMAN: Craig Jones, if you can respond to the CEO of Palantir?<\/p>\n<p>CRAIG JONES: Palantir has a long history of making serious profits, tens of millions, billions, from what ultimately I see as killing people in faraway lands that are all too easy not to care about. I think with this latest endeavor we\u2019ve kind of started this AI arms race. It\u2019s been good to see at least Anthropic throw their hands up and say, \u201cWe want some ethical parameters put on that.\u201d But even that, which seems to be, you know \u2014\u00a0and meanwhile, as that whole controversy has been playing out, as you covered, with the Trump administration, we see Sam Altman from OpenAI rush in and take the contract that Anthropic has ultimately dropped.<\/p>\n<p>Huge profits. The DOD, the Department of War, is a huge customer for many Silicon Valley firms. We\u2019ve seen Microsoft\u2019s platforms used for the Israeli targeting. Apparently, Microsoft are looking into that. We see Google AI analytics also used for Palantir and for U.S. DOD contracts. This is huge money. 
And I think, you know, should the Silicon Valley community wake up to, ultimately, the consequences of the technologies which they\u2019re working on, and see their effects on the ground \u2014\u00a0which is where I work, with the people who have lost entire families, who\u2019ve had their homes destroyed, who\u2019ve been displaced, who have, you know, had their legs blown off \u2014\u00a0there\u2019s this real disconnect between those tens of billions being made for profits of war and those people who suffer its consequences.<\/p>\n<p>AMY GOODMAN: This is OpenAI CEO Sam Altman, who you mentioned, speaking at the India AI Impact Summit in New Delhi in February.<\/p>\n<p>SAM ALTMAN: We don\u2019t yet know how to think about some superintelligence being aligned with dictators in totalitarian countries. We don\u2019t know how to think about countries using AI to fight new kinds of war with each other. We don\u2019t know how to think about when and whether countries are going to have to think about new forms of social contracts. But we think it\u2019s important to have more understanding and societywide debate, before we\u2019re all surprised.<\/p>\n<p>AMY GOODMAN: So, that\u2019s Sam Altman of OpenAI. And just quoting the Pentagon secretary \u2014\u00a0Trump calls him the war secretary \u2014\u00a0the defense secretary, Pete Hegseth, at a briefing in the last days, \u201cUnlike so many of our traditional allies who wring their hands and clutch their pearls, hemming and hawing about the use of force, America, regardless of what \u2026 international institutions say, is unleashing the most lethal and precise air power campaign in history \u2014\u00a0B-2s, fighters, drones, missiles and, of course, classified effects \u2014\u00a0all on our terms with maximum authorities. No stupid rules of engagement, no nation-building quagmire, no democracy-building exercise, no politically correct wars.\u201d Craig Jones?<\/p>\n<p>CRAIG JONES: Those are two jarring statements. 
Sam Altman\u2019s, you know, ideas, I think, in what he said, he\u2019s right. We don\u2019t know about this. We don\u2019t know about that, what the future holds. My view would be that because we don\u2019t know the potential dangers, risks and damages that these technologies bring, we should pause, as societies, as companies, as nations, as leaders, to have a serious conversation about what kind of AI future we want, whether this is a world that we want to build.<\/p>\n<p>Meanwhile, Hegseth, the Department of War, in January, released a statement \u2014\u00a0a whole program, actually, called the AI warfare fighter strategy, which some of what you\u2019ve just read comes from. And it talks about maximum lethality, as you say, out with the rules of engagement. This is a deliberate sidelining of the checks and balances, accountabilities for war, the firing of military lawyers, who are the community that I\u2019ve worked with, that give legal advice to militaries, and just going ahead with it and saying \u2014\u00a0Hegseth saying explicitly, you know, even though we don\u2019t know how these technologies work, we need this first-mover advantage. And it\u2019s that classic move fast and break things, and, you know, we don\u2019t care about the consequences. These are really worrying times and developments.<\/p>\n<p>AMY GOODMAN: You\u2019ve referred to the war lawyers several times, and it\u2019s the title of your book. Explain what you mean and how they\u2019ve been fired and sidelined.<\/p>\n<p>CRAIG JONES: So, these military lawyers have been, you know, fighting alongside militaries for centuries. In fact, the U.S. JAG Corps is the oldest law firm in America. And they do all kinds of things, but the thing that I\u2019ve been interested in is that they give advice to military commanders and decision-makers for operations. 
So, any time a single target has been struck in the last couple of decades, you would have a military lawyer present looking at things, doing what\u2019s called a proportionality calculation. So, OK, here\u2019s the military target. What\u2019s the risk to civilians? Should we go ahead? Should we pause? Are there certain measures we can take to avoid civilian casualties? And a host of other considerations. You know, one would be the girls\u2019 school. Is this a legitimate target, or is this indeed a girls\u2019 school? So, that\u2019s military necessity. And, you know, they\u2019ve had a long history. And, you know, I work with them. These are professional, serious people, educated at the best law schools throughout America. They\u2019re also soldiers. Israel has its own version of them. And, you know, they\u2019ve done credible, credible work with militaries.<\/p>\n<p>And one of the first acts Trump does after he\u2019s sworn in for his second term is to fire the heads of those legal units. So, you know, the Navy, the Army, the Air Force, each have their own heads. He fired them. And then further down the ranks, he fired and replaced them with yes men. And beyond the firing and replacing, we are hearing from reporting and from some of my own contacts that the military lawyers are either just not being listened to when they raise objections, or, you know, they\u2019re becoming silent in these war rooms where these decisions are made, because much like in the Trump administration, where his, you know, civilians and his advisers are around him, unless you say yes and go along with it, you\u2019re simply not welcome there, and you\u2019ll either be fired or not listened to.<\/p>\n<p>And so, again, you know, it\u2019s seriously worrying, especially when you put that alongside this simultaneous war on all these civilian casualty initiatives. So, there was something called the Center of Excellence, which was to do with civilian protection. 
It\u2019s been a decade in the making. Lots of senior people in the U.S. administrations, from Obama to Biden through Trump term one, have been involved in that. And Trump presses control-alt-delete on day one and gets rid of the civilian center, because they\u2019re not interested in avoiding civilian casualties, which feels like we\u2019re harkening back to Vietnam or something.<\/p>\n<p>AMY GOODMAN: Finally, Craig Jones, we just have a minute, but if you can explain this rift between Anthropic and the Pentagon, Anthropic saying its technology could not be used for mass surveillance of Americans and for fully autonomous weapons, and then the Trump administration retaliating \u2014\u00a0after they sued, President Trump ordering federal agencies to stop using Anthropic products, Pete Hegseth declaring the firm a supply chain risk? But then we hear that Claude, owned by Anthropic, was possibly used by Palantir in targeting this girls\u2019 school, killing well over a hundred girls.<\/p>\n<p>CRAIG JONES: Yeah, there\u2019s lots to say here. One is that, you know, that seems like a disproportionate act, when a company just, you know, exercises its right to disagree with what the government is doing. And, you know, I think the CEO at the time said, you know, \u201cDisagreeing with the government is as American as apple pie.\u201d But the other thing is that this is infrastructure. I think some people think, you know, AI is just a tool. You know, it\u2019s something on your desk, or it\u2019s something in the background. You just press delete. It\u2019s infrastructure that\u2019s embedded in the entire, you know, intelligence apparatus, and so, therefore, you can\u2019t just delete it, which is why it\u2019s still used, and why it might take up to six months to try and get some of the Claude products out of the software.<\/p>\n<p>The other thing is, you know, it was good to see that ethical objection. 
It seems like the only, you know, moral stance that has been taken in these conversations in the AI war, certainly in Silicon Valley. I would just object to their objection on two grounds. They\u2019re against mass surveillance of U.S. citizens only. They say nothing about citizens around the world. And partly, their objection to its use for lethality is a technical, rather than moral, objection. It\u2019s to say right now the algorithms are not quite good enough, because they have this error rate. But they\u2019re not necessarily saying that they wouldn\u2019t go along with that use later on. So, it\u2019s not that they\u2019re against lethality and killing, per se, but that just technically the algorithms are not quite ready, and so they wanted to press pause. So, there\u2019s lots to say about that, but it is a disproportionate act and response by the Trump administration, I think.<\/p>\n<p>AMY GOODMAN: Craig Jones, we want to thank you for being with us, senior lecturer in political geography at Newcastle University, joining us from the U.K., the author of The War Lawyers: The United States, Israel, and Juridical Warfare, expert on modern warfare and aerial targeting, currently leading a research project on civilian casualties and war-related injury in Gaza and Iraq. We\u2019ll link to your <a href=\"https:\/\/theconversation.com\/iran-war-shows-how-ai-speeds-up-military-kill-chains-278492\" rel=\"nofollow noopener\" target=\"_blank\">piece<\/a> in The Conversation, \u201cIran war shows how AI speeds up military &#8216;kill chains.&#8217;\u201d<\/p>\n<p>Coming up, the director of the National Counterterrorism Center has resigned over the war in Iran, magnifying a rift within the MAGA movement over the war. Stay with us.<\/p>\n<p>[break]<\/p>\n<p>AMY GOODMAN: \u201cWelcome to the New World.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"This is a rush transcript. Copy may not be in its final form. 
AMY GOODMAN: As the U.S.&hellip;\n","protected":false},"author":2,"featured_media":532641,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-532640","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/532640","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=532640"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/532640\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/532641"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=532640"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=532640"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=532640"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}