Just three years after OpenAI’s launch of ChatGPT put a new form of artificial intelligence at everyone’s fingertips, AI has raced through the hype cycle from obscurity to commonplace, from novelty toy to workaday tool. That’s now true even for soldiers, military planners, and state-sponsored hackers around the world.

In the process, AI has become not only routinized but institutionalized. In January, newly inaugurated President Trump hosted OpenAI and partners in the Oval Office to announce what they called Stargate, a plan to invest $500 billion in new data centers, with the US military as a major potential customer. By August, the Pentagon’s independent Chief Digital & AI Office had been absorbed into the traditional Research & Engineering undersecretariat.

And in December, Defense Secretary Pete Hegseth and R&E undersecretary Emil Michael announced a new website, GenAI.mil, to make commercial Large Language Model tools available to all three million military and civilian Defense Department personnel.

[This article is one of many in a series in which Breaking Defense reporters look back on the most significant (and entertaining) news stories of 2025 and look forward to what 2026 may hold.]

It’s not just chatbots, either. The US military is testing AI for airspace management over battlefields abuzz with drones, for automated recognition of targets like hostile tanks, even for streamlining production of nuclear-powered submarines. Many of these tools rely on forms of machine learning other than the Large Language Models underlying ChatGPT and other generative AIs; others yoke GenAI to more traditional software, constraining its tendency toward hallucinatory flights of fancy.

Amidst all these dizzying developments, five stories we covered this year stand out as especially significant — or ominous.

1. Trained on classified battlefield data, AI multiplies effectiveness of Ukraine’s drones: Report

In March, former Ukrainian defense official Kateryna Bondar shared her latest study with Breaking Defense, a report on how her home country had harnessed AI to improve the lethal efficiency of its attack drones.

Ukraine’s desperately innovative defense sector wasn’t just cramming slimmed-down AI algorithms into the relatively tiny brains of the drones themselves, helping guide them the last few hundred meters to human-designated targets. It was also using widely available open-source AI models to train the targeting algorithms, crunching vast amounts of data ingested by frontline sensors.

This kind of algorithmic one-two punch — big models crunching big data back at headquarters, streamlined mini-models running on limited computing power at the front line — is increasingly the approach the US military is exploring too.

2. ‘No human hands’: NGA circulates AI-generated intel, director says

The National Geospatial-Intelligence Agency (NGA) has been in the vanguard of large-scale adoption of AI. It has too much data — including imagery of almost every inch of the earth’s surface — not to embrace such a powerful tool for taming it.

Even as OpenAI was rolling out ChatGPT in late 2022, NGA was quietly taking over the geospatial side of the Pentagon’s pioneering Project Maven, a very different kind of AI developed to detect potential targets in surveillance video. “NGA Maven” soon became one of the agency’s most popular products, to the point that demand was straining the agency’s computing resources.

As NGA sought to streamline its provision of intelligence and unburden its human workforce, it experimented with using AI not just to analyze data, but to generate reports. By June of this year, this automated process was so far along and so normalized that the agency’s director publicly declared NGA was using a new standardized report template to distinguish purely AI-generated products from human-made ones. “No human hands actually participate in that particular template and that particular dissemination,” Vice Adm. Frank Whitworth said. “That’s new and different.”

3. Joint Fires Network will complete transition from R&D to acquisition program Oct. 1

Sometimes big news comes not with a bang from the battlefield, but from the slowly grinding gears of the bureaucracy.

At the annual Air Force Association conference in September, the Air Force acquisition chief for Command, Control, Communications, & Battle Management (C3BM), Maj. Gen. Luke Cropsey, told reporters that he was formally taking over something called the Joint Fires Network. That seemingly banal move meant that the roughly three-year-old JFN, until then an experimental effort, was now deemed mature and mainstream enough to become a traditional joint acquisition program.

That’s a remarkable milestone for JFN, which uses AI to assign enemy targets to US weapons on a massive scale, not just on a single battlefield but potentially across the entire Pacific theater in a future war with China. While the JFN algorithm doesn’t pull the trigger, it aims to streamline the complex, laborious planning process of figuring out, across hundreds of different weapons and targets, “who should shoot who?” A military that can automate this kind of life-or-death grunt work could get more warheads on more targets more quickly, with fewer inefficiencies or errors. That’s a potentially war-winning advantage — if you can actually trust the AI’s plans.
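JFN’s actual data and methods aren’t public, but the “who should shoot who” question it tackles is, at its core, a classic assignment problem. Purely as an illustration of that class of problem — every weapon, target, and cost figure below is hypothetical — here is a toy sketch using a standard cost-minimizing solver:

```python
# Toy illustration of a weapon-target assignment problem. JFN's internals
# are not public; the weapons, targets, and cost scores here are invented
# solely to show the kind of matching an algorithm like this automates.
import numpy as np
from scipy.optimize import linear_sum_assignment

weapons = ["HIMARS battery", "F-35 flight", "destroyer VLS"]
targets = ["SAM site", "coastal radar", "command post"]

# cost[i][j]: notional "cost" (time, munitions, risk) of weapon i
# engaging target j -- lower is better.
cost = np.array([
    [2.0, 5.0, 4.0],
    [3.0, 1.0, 6.0],
    [7.0, 4.0, 2.0],
])

# The Hungarian algorithm finds the one-to-one pairing with minimum total cost.
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"{weapons[i]} -> {targets[j]} (cost {cost[i, j]})")
```

At the scale of three weapons and three targets a staff officer can do this by eye; at the scale of hundreds of weapons and targets across a theater, automation is the only way to keep pace.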

4. Air Force AI writes battle plans faster than humans can — but some of them are wrong

Another story from the same Air Force Association conference, however, shows the disturbing underbelly of AI-assisted war planning. The US military has been experimenting with using AI to crunch military intelligence into recommended “courses of action” (COAs), and it’s found the algorithms can dramatically speed up the work compared to human staff officers using traditional software tools. In one exercise called DASH-2, said Maj. Gen. Robert Claude (no relation to the Anthropic chatbot), humans generated three COAs in 16 minutes, while the AI generated 10 in “roughly eight seconds.”

That averages out to the AI being 400 times faster. But the problem, Claude continued, is that some of the AI plans weren’t just bad, they were unworkable: They ignored some crucial nuance, like which sensors work in which kinds of weather, an oversight that would have doomed the mission. This is a subtler kind of problem than the blatant hallucinations of civilian chatbots, but with much higher stakes. The question for the US military is whether it can root out such errors before the shooting starts.
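For the curious, the 400-fold figure falls straight out of the numbers Claude cited, assuming “roughly eight seconds” means eight seconds flat:

```python
# Back-of-envelope check of the reported DASH-2 figures, taking
# "roughly eight seconds" as exactly eight seconds.
human_rate = 3 / (16 * 60)   # 3 COAs in 16 minutes -> COAs per second
ai_rate = 10 / 8             # 10 COAs in ~8 seconds -> COAs per second

print(ai_rate / human_rate)  # 400.0
```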

5. Chinese use of Claude AI for hacking will drive demand for AI cyber defense, say experts

Democratic nations like the US and Ukraine aren’t the only ones innovating in AI, and in contrast to the West, authoritarian states have a higher tolerance for collateral damage, physical or digital, as long as they get what they want. So it’s really not surprising that a Beijing-backed hacker group is the first organization — that we know of — to use generative AI to conduct cyber attacks. As alleged by Anthropic, the hackers effectively gaslit Anthropic’s Claude Code into thinking they were legitimate cybersecurity researchers and got it to hack about 30 government agencies and private companies.

This isn’t the first time that AI has been used to hack a network. What’s new and unnerving here is that the AI wasn’t just a tool in human hands, but the agent actually conducting the hack itself, or at least 80 percent of the individual actions required for the cyber attack. (Part of the gaslighting process was breaking up the hacks into so many small, individually innocuous actions that Claude didn’t realize the nefarious nature of the overall campaign.)

What’s more, this wasn’t some bespoke tailored access tool developed by highly skilled government operatives inside some secret agency: It was an off-the-shelf commercial AI available to anyone with an internet connection and a credit card. Like car bombs and semi-automatic rifles, AI is now easily available everywhere.