Researchers at the Lawrence Berkeley National Laboratory have developed a design and training framework that allows computers to use thermal noise as a power source rather than a hindrance.

The study demonstrates that “thermodynamic computing” can now mimic neural networks to perform complex, nonlinear machine learning tasks. 

It does all this while operating at room temperature, harnessing the microscopic vibrations of electrons that traditional computers spend vast amounts of energy to suppress.

Flipping the script on thermal noise

For classical and quantum computing, heat is considered the enemy. Thermal noise, the random motion of charge carriers such as electrons, can scramble data and cause errors.

To combat this, classical computers operate at signal energies far above the thermal scale to "drown out" the noise, while quantum computers often require extreme cooling to near absolute zero.

Thermodynamic computing inverts this paradigm. “Thermodynamic computing is noise-powered,” explains Stephen Whitelam, a staff scientist at the Molecular Foundry and co-author of the paper. 

“The premise of thermodynamic computing is that if you take a physical device with an energy scale comparable to that of thermal energy and leave it alone, it will change state over time, driven by thermal fluctuations. The goal is to program it so that this time evolution does something useful.”
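The "leave it alone and it changes state" behavior Whitelam describes can be illustrated with a standard toy model (not the team's actual hardware): an overdamped Langevin particle in a double-well potential, where thermal noise alone drives hops between the two stable states.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x0=1.0, steps=200_000, dt=1e-3, kT=0.4):
    """Overdamped Langevin dynamics in the double-well potential
    U(x) = (x^2 - 1)^2, a common toy model of a noisy bistable device.

    With no noise (kT = 0) the state sits in one well forever; with
    thermal noise, the state evolves and hops between wells without
    any external drive -- the fluctuations themselves do the work.
    """
    x = x0
    traj = np.empty(steps)
    for i in range(steps):
        force = -4.0 * x * (x * x - 1.0)                # -dU/dx
        x += force * dt + np.sqrt(2.0 * kT * dt) * rng.normal()
        traj[i] = x
    return traj

traj = simulate()
# With kT comparable to the barrier height, the noisy trajectory
# visits both wells (near x = -1 and x = +1); a noiseless run from
# x0 = 1 would never leave the right well.
print(traj.min() < -0.5 and traj.max() > 0.5)
```

The potential, temperature, and time step here are illustrative choices; the point is only that an energy scale comparable to kT lets fluctuations move the system between computationally meaningful states.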

Overcoming the “waiting” problem

Until now, thermodynamic computing faced two major roadblocks. The first was an equilibrium constraint: researchers had to wait for a system to settle into its lowest-energy state before a calculation could be read out, a process often too slow for practical use.

The second was linearity: the technology was largely restricted to simple linear algebra, making it unsuitable for the complex, nonlinear demands of modern AI.

The team bypassed these hurdles using digital simulations. They showed that, with nonlinear components, a thermodynamic computer can be trained to produce correct outputs at specific readout times, whether or not the system has reached equilibrium.

This allows the hardware to function more like a traditional processor—fast and predictable—but with a fraction of the power.

Training the “stochastic” brain

Because thermodynamic computers are “stochastic,” which means no two runs look exactly the same due to the random nature of heat, standard training methods for AI do not work.

To solve this, researcher Corneel Casert utilized the Perlmutter supercomputer at NERSC. Using 96 GPUs in parallel, Casert ran “evolutionary simulations” that evaluated over a trillion noisy trajectories. 

By using a genetic algorithm, the team was able to find effective parameters for a noise-powered system.
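Because each run of a stochastic system differs, gradients are hard to come by, and a genetic algorithm instead scores candidate parameters by averaging over many noisy trajectories. The following is a minimal sketch of that idea on a made-up toy system (a noisy relaxation unit, not the team's model): evolve parameters so the state, read out at a fixed time rather than at equilibrium, averages to a target value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a noise-powered unit (illustrative, not the paper's
# model): dx = -k (x - b) dt + thermal noise. We tune (k, b) so that
# the state, read out at a fixed time T_READ, averages to TARGET.
TARGET, T_READ, DT, KT = 0.7, 1.0, 0.01, 0.2

def readout(params, n_runs=64):
    """Average the state at time T_READ over many noisy trajectories."""
    k, b = params
    x = np.zeros(n_runs)                 # n_runs independent runs
    for _ in range(int(T_READ / DT)):
        x += -k * (x - b) * DT + np.sqrt(2 * KT * DT) * rng.normal(size=n_runs)
    return x.mean()

def fitness(params):
    # Gradient-free score: negative squared error of the noisy readout.
    return -(readout(params) - TARGET) ** 2

# Minimal genetic algorithm: keep the fittest parameters, mutate them.
pop = [rng.normal(size=2) for _ in range(20)]
for gen in range(30):
    elite = sorted(pop, key=fitness, reverse=True)[:5]
    pop = elite + [e + 0.2 * rng.normal(size=2) for e in elite for _ in range(3)]

best = max(pop, key=fitness)
print(abs(readout(best) - TARGET))       # residual error after training
```

The actual study ran far larger evolutionary simulations (over a trillion trajectories on 96 GPUs), but the structure is the same: noisy evaluations, selection, and mutation in place of backpropagation.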

“Training a thermodynamic neural network by simulating it digitally is expensive,” added Casert. 

“But once trained and built as physical hardware, we can perform inference on that hardware for a very low energy cost.”

The future of low-power AI

This development’s implications are massive. A single Google search currently consumes enough energy to power a six-watt LED for three minutes. 

By shifting the heavy lifting of AI inference to thermodynamic hardware, that energy cost could plummet.

The Berkeley Lab team is now seeking experimental partners to translate these digital designs into physical hardware.