The next blackout to plunge a G20 nation into chaos might not come courtesy of cybercriminals or bad weather, but from an AI system tripping over its own shoelaces.

Analyst firm Gartner warned this week that misconfigured artificial intelligence embedded in national infrastructure could shut down critical services in a major economy as soon as 2028, delivering the kind of disruption usually blamed on hostile governments or catastrophic natural events. The prediction centers on the rapid adoption of AI in cyber-physical systems, which Gartner defines as “systems that orchestrate sensing, computation, control, networking, and analytics to interact with the physical world (including humans).”

Gartner’s warning isn’t really about attackers taking over AI tools – it’s about what happens when everything is working as intended… until it isn’t. More operators are letting machine learning systems make real-time decisions, and those systems can respond unpredictably when a setting is changed, an update is pushed, or flawed data is fed in.

Unlike traditional software bugs that might crash a server or scramble a database, errors in AI-driven control systems can spill into the physical world, triggering equipment failures, forcing shutdowns, or destabilizing entire supply chains, Gartner warns. 

“The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal,” cautioned Wam Voster, VP Analyst at Gartner.
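Voster’s “misplaced decimal” is easy to picture. Here is a minimal, entirely hypothetical sketch – the controller, the gain figure, and the frequencies are all invented for illustration – of how one shifted decimal point in a config value turns a routine correction into a tenfold over-response:

```python
# Hypothetical frequency-response controller, for illustration only.
# A grid operator configures how much load to shed per hertz of
# frequency dip. One slipped decimal in that gain changes everything.

NOMINAL_HZ = 50.0  # illustrative nominal grid frequency

def load_to_shed_mw(freq_hz: float, gain_mw_per_hz: float) -> float:
    """Return how much load (MW) the controller sheds for a frequency dip."""
    dip = max(0.0, NOMINAL_HZ - freq_hz)
    return dip * gain_mw_per_hz

# Intended config: shed 200 MW per Hz of dip.
print(load_to_shed_mw(49.8, 200.0))   # a modest correction, roughly 40 MW

# Same dip, but the engineer typed 2000 instead of 200:
print(load_to_shed_mw(49.8, 2000.0))  # roughly 400 MW dropped at once
```

Nothing here is hacked or broken in the traditional sense; the code does exactly what its configuration tells it to.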

Power grids are an obvious stress test. Energy firms now rely heavily on AI to monitor supply, demand, and renewable generation. If the software malfunctions or misreads data, sections of the network could go dark, and repairing damaged grid hardware is rarely a quick process. The same creeping automation is turning up in factories, transport systems, and robotics, where AI is slowly taking over decisions that used to involve a human looking mildly concerned at a dashboard.
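The “misreads data” failure mode needs no exotic bug, either. A toy sketch, again with invented names and numbers rather than any vendor’s real system: a dispatch routine trusts a demand feed, and after a firmware update one upstream sensor silently starts reporting kilowatts where the model expects megawatts.

```python
# Illustrative only: a naive dispatcher that schedules generation to
# cover whatever demand its sensor feeds report, plus a fixed reserve.

def dispatch_mw(demand_readings_mw: list[float], reserve_mw: float = 50.0) -> float:
    """Schedule generation (MW) to cover reported demand plus a reserve."""
    return sum(demand_readings_mw) + reserve_mw

healthy = [120.0, 95.0, 110.0]    # three regions, all reporting in MW
print(dispatch_mw(healthy))       # schedules 375 MW

# One feed now reports kW (95 MW arrives as 95000) with no unit tag:
flawed = [120.0, 95000.0, 110.0]
print(dispatch_mw(flawed))        # schedules 95280 MW of phantom demand
```

Every line of the flawed run executes cleanly – the data, not the code, is wrong, which is precisely the class of failure Gartner is pointing at.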

Gartner’s bigger worry is how quickly AI is being deployed in environments where mistakes don’t just crash software; they break real things. AI is turning up in systems where failures can shut down physical infrastructure, yet the models themselves aren’t always fully understood, even by the teams building them. That makes it difficult to predict how they’ll react when something unexpected happens or when routine updates are released.

“Modern AI models are so complex they often resemble black boxes,” said Voster. “Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed.”

While regulators have spent years focusing on cybersecurity threats to operational technology, Gartner’s forecast suggests the next wave of infrastructure risk could be self-inflicted rather than adversary-driven. ®