AI-generated illustration. Credit: ZME Science/Nanobanana.
When you swing a tennis racket or catch a set of keys, you aren’t thinking about wind resistance or gravity. Yet, to perform that motion, your brain is solving a massive physics problem in milliseconds. It is processing the same kind of complex math that typically demands a warehouse-sized supercomputer.
Researchers Brad Theilman and James Aimone from Sandia National Laboratories have now demonstrated that neuromorphic hardware can bridge the gap between the efficiency of the human brain and the energy appetite of conventional supercomputers. They showed that neuromorphic hardware — chips designed to emulate the sparse, asynchronous communication of biological brains — can directly solve the complex partial differential equations (PDEs) that underpin our understanding of the physical world and form the bedrock of scientific simulation.
By translating the trusted mathematics of structural mechanics into the language of spiking neurons, the team has opened a backdoor to energy-efficient supercomputing that looks less like a processor of ones and zeroes and more like a living mind.
The Problem with Simulating the World
Whether forecasting a hurricane’s path or testing a nuclear warhead, scientists rely on PDEs.
To solve these on a computer, engineers use the Finite Element Method (FEM). They take a complex shape — say, an airplane wing — and break it down into a “mesh” of millions of tiny, simple geometric elements. Solving the math for these millions of elements requires massive supercomputers that guzzle electricity and generate immense heat.
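The idea behind the Finite Element Method can be sketched in a few lines of code. The toy example below is a hypothetical illustration, not Sandia's software: it solves a one-dimensional stand-in problem (a bar under uniform load, the equation -u'' = f with both ends held fixed) by assembling the classic tridiagonal "stiffness" system from per-element contributions, then solving it. Real engineering codes do exactly this, but with millions of 3D elements instead of a hundred 1D ones.

```python
def fem_1d(n_elements, f=1.0, length=1.0):
    """Solve -u'' = f on [0, length] with u(0) = u(length) = 0
    using linear finite elements on a uniform mesh."""
    n = n_elements - 1                  # interior mesh nodes = unknowns
    h = length / n_elements             # size of each tiny element
    # Each element contributes the stiffness block [1, -1; -1, 1] / h,
    # assembling into a tridiagonal system: 2/h on the diagonal, -1/h off it.
    diag = [2.0 / h] * n
    off = -1.0 / h
    b = [f * h] * n                     # load vector: integral of f per node
    # Solve the tridiagonal system (Thomas algorithm): forward elimination...
    for i in range(1, n):
        m = off / diag[i - 1]
        diag[i] -= m * off
        b[i] -= m * b[i - 1]
    # ...then back substitution.
    u = [0.0] * n
    u[-1] = b[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (b[i] - off * u[i + 1]) / diag[i]
    return u                            # displacement at each mesh node

u = fem_1d(100)
print(round(max(u), 4))  # exact solution x(1-x)/2 peaks at 0.125
```

The tridiagonal structure is the giveaway for why supercomputers struggle at scale: every node's equation couples to its neighbors, so solving the system means constantly moving data between distant parts of the machine.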
The huge energy expenditure is partly owed to the way computer architecture is currently designed. Traditional chips spend vast amounts of energy shuttling numbers back and forth between memory and processors. The brain, however, doesn’t work that way. It keeps memory and computation together, distributed across billions of neurons.
“We’re just starting to have computational systems that can exhibit intelligent-like behavior. But they look nothing like the brain, and the amount of resources that they require is ridiculous, frankly,” says Brad Theilman, a computational neuroscientist at Sandia.
The “NeuroFEM” Breakthrough
Theilman and his colleague James Aimone didn’t try to train a neural network to guess the answer to physics problems, as many deep learning AI models do. Instead, they found a way to translate the exact mathematics of the Finite Element Method into a Spiking Neural Network (SNN).
They call it NeuroFEM.
In their system, the mesh of the physical object is mapped onto a mesh of neurons. Instead of passing complex floating-point numbers (like 3.14159) back and forth, these neurons communicate via "spikes": tiny, binary pulses of electricity that mimic the all-or-nothing signals of biological neurons.
It functions like a microscopic tug-of-war. For every point in the mesh, a small population of neurons receives input and “spikes” to signal a value. Half the neurons push the value positive, and half push it negative. Through this rapid-fire, asynchronous communication, the network naturally flows toward a balance point. That balance point is the solution to the equation.
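That balance-seeking dynamic can be illustrated with a toy sketch. The code below is a heavily simplified, hypothetical version of the principle, not the paper's actual formulation: each unknown gets a pair of opposing integrate-and-fire units whose membrane potential charges with the local error; crossing the positive threshold fires a spike that nudges the value up, crossing the negative one nudges it down, and the spiking settles where the equations balance.

```python
def spiking_solve(A, b, theta=1.0, step=0.01, n_steps=5000):
    """Toy 'tug-of-war' solver for A x = b (A symmetric positive definite).
    Each unknown i is a pair of opposing integrate-and-fire units: the local
    residual b[i] - (A x)[i] charges a membrane potential v[i]; crossing
    +theta fires a positive spike (x[i] += step), crossing -theta a negative
    one. The network settles where all residuals, and hence spiking, vanish."""
    n = len(b)
    x = [0.0] * n              # current estimate, encoded by net spike count
    v = [0.0] * n              # membrane potentials
    for _ in range(n_steps):
        for i in range(n):
            residual = b[i] - sum(A[i][j] * x[j] for j in range(n))
            v[i] += residual   # charge toward whichever side is "winning"
            if v[i] >= theta:      # positive unit fires
                x[i] += step
                v[i] -= theta
            elif v[i] <= -theta:   # negative unit fires
                x[i] -= step
                v[i] += theta
    return x

# A stiffness-like system whose exact solution is x = [1, 1, 1]
A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = spiking_solve(A, b)
```

Notice that once the residuals hit zero, the neurons simply stop firing. That sparsity is where the energy savings come from: unlike a conventional processor, the network does no work where nothing is changing.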
“You can solve real physics problems with brain-like computation,” Aimone says. “That’s something you wouldn’t expect because people’s intuition goes the opposite way. And in fact, that intuition is often wrong”.
Silicon That Scales
To prove this wasn’t just a blackboard theory, the team ran their algorithm on Intel’s Loihi 2, a cutting-edge neuromorphic chip.
The results were startlingly efficient. The researchers found that their algorithm exhibited "close to ideal scaling". In traditional computing, adding more processors often yields diminishing returns, because the processors spend ever more time stuck in data traffic jams between one another. But with NeuroFEM on Loihi 2, doubling the number of cores nearly halved the time required to solve the problem.
At the same time, the energy cost to reach a solution was significantly lower than running the same math on a standard CPU. As the problems get larger and more complex, this energy advantage is expected to grow.
From Tennis Balls to Warheads
Why does a chip designed to mimic the brain excel at physics? It turns out, your brain is doing this kind of math all the time.
“Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball,” Aimone explains. “These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply”.
The algorithm they used is actually based on a model of the brain's motor cortex. The same neural architecture that evolution built to control your arm movements turns out to be mathematically well suited to simulating the bending of a steel beam. That's a pretty wild thought.
This has massive implications for the National Nuclear Security Administration (NNSA), which funded the work. The NNSA relies on massive simulations to maintain the nuclear deterrent without physically detonating hydrogen bombs.
“Neuromorphic computing may provide a way to significantly cut energy use while still delivering strong computational performance,” effectively allowing for larger, faster simulations on a smaller power budget, according to the researchers.
The “Neuromorphic Twin”
Perhaps the most exciting application is the concept of the “neuromorphic twin”.
Because these chips are low-power and process data as real-time spikes, they could be embedded directly into physical structures, like a bridge or a turbine. The chip could run a continuous simulation of the object it is embedded in and the forces that act upon it, updating instantly based on sensor data to predict structural failure before it happens.
The team even demonstrated that their network could handle complex 3D shapes, such as a hollow sphere deforming under gravity, proving it can handle the messy, unstructured geometry of the real world.
One of the biggest criticisms of modern AI in science is the "black box" problem. We often don't know how an AI gets its answer. NeuroFEM is different: because it implements the exact mathematics of the Finite Element Method rather than a learned approximation, its answers are as trustworthy as the underlying equations.
“If we’ve already shown that we can import this relatively basic but fundamental applied math algorithm into neuromorphic — is there a corresponding neuromorphic formulation for even more advanced applied math techniques?” Theilman asks.
As development continues, the researchers are optimistic. “We have a foot in the door for understanding the scientific questions, but also we have something that solves a real problem,” Theilman added.
The findings appeared in the journal Nature Machine Intelligence.
