{"id":205454,"date":"2025-10-11T12:53:13","date_gmt":"2025-10-11T12:53:13","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/205454\/"},"modified":"2025-10-11T12:53:13","modified_gmt":"2025-10-11T12:53:13","slug":"ai-weapons-are-dangerous-in-war-but-saying-they-cant-be-held-accountable-misses-the-point","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/205454\/","title":{"rendered":"AI weapons are dangerous in war. But saying they can\u2019t be held accountable misses the point"},"content":{"rendered":"<p>In a speech to the United Nations Security Council last month, Australia\u2019s Minister for Foreign Affairs, Penny Wong, <a href=\"https:\/\/www.foreignminister.gov.au\/minister\/penny-wong\/speech\/statement-un-security-council-open-debate\" rel=\"nofollow noopener\" target=\"_blank\">took aim at artificial intelligence<\/a> (AI).<\/p>\n<p>While she said the technology \u201cheralds extraordinary promise\u201d in fields such as health and education, she also said its potential use in nuclear weapons and unmanned systems challenges the future of humanity:<\/p>\n<p>Nuclear warfare has so far been constrained by human judgement. By leaders who bear responsibility and by human conscience. AI has no such concern, nor can it be held accountable. These weapons threaten to change war itself and they risk escalation without warning.<\/p>\n<p>This idea \u2013 that AI warfare poses a unique threat \u2013 often <a href=\"https:\/\/news.un.org\/en\/story\/2025\/06\/1163891\" rel=\"nofollow noopener\" target=\"_blank\">features<\/a> in public calls to safeguard this technology. But it is clouded by various misrepresentations of both the technology and warfare. <\/p>\n<p>This raises the questions: will AI actually change the nature of warfare? And is it really unaccountable?<\/p>\n<p>How is AI being used in warfare?<\/p>\n<p>AI is by no means a new technology, with the <a href=\"https:\/\/theconversation.com\/ai-was-born-at-a-us-summer-camp-68-years-ago-heres-why-that-event-still-matters-today-237205\" rel=\"nofollow noopener\" target=\"_blank\">term originally coined in the 1950s<\/a>. It has now become an umbrella term that encompasses everything from large language models to computer vision to neural networks \u2013 all of which are very different.<\/p>\n<p>Generally speaking, applications of AI analyse patterns in data to infer, from inputs such as text prompts, how to generate outputs such as predictions, content, recommendations or decisions. But the underlying ways these systems are trained <a href=\"https:\/\/fpf.org\/wp-content\/uploads\/2021\/08\/FPF-AIEcosystem-Report-FINAL-Print.pdf\" rel=\"nofollow noopener\" target=\"_blank\">are not always comparable<\/a>, despite them all being labelled as \u201cAI\u201d.<\/p>\n<p>The use of AI in warfare ranges from <a href=\"https:\/\/www.act.nato.int\/article\/harnessing-artificial-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">wargaming simulations<\/a> used for training soldiers, through to the more problematic AI decision-support systems used for targeting, such as the <a href=\"https:\/\/time.com\/7202584\/gaza-ukraine-ai-warfare\/\" rel=\"nofollow noopener\" target=\"_blank\">Israel Defence Force\u2019s use of the \u201cLavender\u201d system<\/a> which allegedly identifies suspected members of Hamas, or other armed groups.<\/p>\n<p>Broad discussions on AI in the military domain capture both of these examples, when it is only the latter which sits at the point of life-and-death decision making. 
The use of AI in warfare ranges from [wargaming simulations](https://www.act.nato.int/article/harnessing-artificial-intelligence/) used for training soldiers through to the more problematic AI decision-support systems used for targeting, such as the [Israel Defence Force's use of the "Lavender" system](https://time.com/7202584/gaza-ukraine-ai-warfare/), which allegedly identifies suspected members of Hamas or other armed groups.

Broad discussions of AI in the military domain capture both of these examples, even though only the latter sits at the point of life-and-death decision making. It is this point which dominates most of the moral debates about AI in the context of warfare.

## Is there really an accountability gap?

Arguments over who, or what, is held liable when something goes wrong extend to both civil and military applications of AI. This predicament has been labelled an ["accountability gap"](https://uxmag.medium.com/the-ai-accountability-gap-00f2e7bc6e53).

Interestingly, this accountability gap, which is fuelled by [media reports](https://news.harvard.edu/gazette/story/2024/01/killer-robots-are-coming-and-u-n-is-worried/) about "killer robots" that make life-and-death decisions in war, is rarely debated when it comes to other technologies.

For example, legacy weapons such as unguided missiles and landmines involve no human oversight or control during the deadliest portion of their operation. Yet no one asks whether the unguided missile or landmine was at fault.

Similarly, the [Robodebt scandal](https://www.abc.net.au/news/2025-09-04/robodebt-victims-get-compensation-from-class-action/105734030) in Australia involved misfeasance on the part of the federal government, not the automated system it relied on to tally debts.

So why do we ask whether AI is at fault?

Like any other complex system, AI systems are designed, developed, acquired and deployed by humans. In military contexts, there is the added layer of [command and control](https://apps.dtic.mil/sti/pdfs/ADA369560.pdf): a hierarchy of decision making to achieve military objectives.

AI does not exist outside this hierarchy. The idea of independent decision making on the part of AI systems is clouded by a misunderstanding of how these systems actually work, and of the processes and practices that led to a system being used in a given application.

While it is correct to say that AI systems cannot be held accountable, it is also superfluous. No inanimate object has ever been, or can be, held accountable in any circumstance, be it an automated debt-recovery system or a military weapon system.

The question of a system's accountability is neither here nor there, because ultimately decisions, and responsibility for those decisions, always sit at the human level.

## It always comes back to humans

All complex systems, including AI systems, exist across a [system lifecycle](https://testbankdeal.com/sample/systems-engineering-and-analysis-5th-edition-blanchard-solutions-manual.pdf): a structured and systematic process that takes a system from initial conception through to its ultimate retirement.

Humans make conscious decisions at every stage of that lifecycle: planning, design, development, implementation, operation and maintenance. These decisions range from technical engineering requirements through to regulatory compliance and operational safeguards.
What this lifecycle structure creates is a [chain of responsibility](https://heinonline.org/HOL/LandingPage?handle=hein.journals/cambrilv8&div=6&id=&page=) with clear intervention points.

This means that when an AI system is deployed, its characteristics, including its faults and limitations, are a product of cumulative human decision making.

AI weapon systems used for targeting are not making decisions about life and death. The people who consciously chose to use those systems in that context are.

So when we talk about regulating AI weapon systems, what we are really regulating is the humans involved in the lifecycle of those systems.

The idea of AI changing the nature of warfare obscures the reality of the roles humans play in military decision making. While this technology has presented, and will continue to present, challenges, those challenges seem always to come back to people.