Summary: The line between biology and computer science just got even blurrier. Researchers have successfully trained living rat neurons to perform complex machine learning tasks. The study integrated cultured neuronal networks into a “reservoir computing” framework.
Using a technique called FORCE learning, the team taught these biological circuits to generate intricate mathematical patterns—including the chaotic Lorenz attractor—proving that living “wetware” can serve as a functional, real-time computational resource.
Key Facts
Reservoir Computing: This framework uses the “natural” messiness and complexity of a network (the reservoir) to process data. Instead of training every single neuron, scientists only train the “readout” layer that interprets the network’s activity (a minimal sketch of this idea follows the list).
FORCE Learning: A method used to adjust output signals in real time based on errors. This is the first time it has been successfully applied to a biological neural network (BNN) to generate time-series data.
The “Chaos” Test: The living neurons didn’t just learn simple sine waves; they successfully reproduced the Lorenz attractor, a complex set of equations used to model chaotic systems like weather patterns.
Microfluidic Precision: Researchers used tiny “plumbing” (microfluidics) to guide how the neurons grew. By creating modular “neighborhoods” of cells, they prevented the neurons from all firing at once (synchronization), which is critical for high-level computing.
Versatility: The same biological system was flexible enough to learn waves with periods ranging from 4 to 30 seconds, demonstrating that living networks are remarkably adaptable.
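To make the reservoir-computing idea concrete, here is a minimal artificial sketch in Python: a fixed random recurrent network produces rich internal states, and only a linear readout is fit to a target signal. The network size, signals, and the ridge-regression fit are illustrative assumptions for this sketch, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random "reservoir": its internal weights are never trained,
# mirroring how the living network itself is left untouched.
N = 300                                          # reservoir size (illustrative)
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # fixed recurrent weights
W_in = rng.normal(0.0, 1.0, (N, 1))              # fixed input weights

T, dt = 2000, 0.1
time = np.arange(T) * dt
u = np.sin(0.2 * time)                           # driving input (illustrative)
target = np.sin(0.2 * time + 1.0)                # pattern the readout must fit

# Collect reservoir states driven by the input.
x = np.zeros((N, 1))
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x.ravel()

# Train ONLY the linear readout; ridge regression is a simple stand-in here.
lam = 1e-2
W_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ target)
prediction = states @ W_out
print("readout RMSE:", np.sqrt(np.mean((prediction - target) ** 2)))
```

The key design point is that all learning is confined to `W_out`; the reservoir's dynamics, whether simulated or biological, are simply observed and reweighted.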
Source: Tohoku University
A research team at Tohoku University and Future University Hakodate has demonstrated that living biological neurons can be trained to perform a supervised temporal pattern learning task previously carried out by artificial systems.
By integrating cultured neuronal networks into a machine learning framework, the team showed that these biological systems can generate complex time-series signals, marking a significant step forward in both neuroscience and bio-inspired computing.
The study was published online in Proceedings of the National Academy of Sciences (PNAS) on March 12, 2026, highlighting a novel intersection between living neural systems and computational technology. The findings suggest that biological neural networks (BNNs) may serve as viable alternatives or complements to existing machine learning models.
Artificial neural networks (ANNs) and spiking neural networks (SNNs) have long been used in machine learning and neuromorphic hardware. A framework known as reservoir computing has emerged as an efficient approach for processing time-dependent data by leveraging the dynamic properties of recurrently connected ANNs and SNNs.
In conventional ANN-based reservoir computing, methods such as First-Order Reduced and Controlled Error (FORCE) learning enable real-time adaptation by continuously adjusting output signals in response to errors.
These techniques allow artificial systems to generate a wide range of temporal patterns, including periodic and chaotic signals. However, whether similar approaches could be applied to biological neural networks has remained an open question.
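For intuition, the following is a compact sketch of FORCE learning on an artificial rate network with output feedback, in the recursive-least-squares (RLS) form commonly used in the literature. The network, gain, time constants, and target are illustrative assumptions; this is the artificial-network version of the algorithm, not the biological setup reported in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random recurrent rate network with output feedback (illustrative sizes).
N, dt, tau = 300, 0.001, 0.01
g = 1.5                                            # gain yielding rich dynamics
W = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # fixed recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                   # fixed feedback weights
w_out = np.zeros(N)                                # trainable readout weights

P = np.eye(N)                                      # RLS inverse-correlation estimate

steps = 20000
t_axis = np.arange(steps) * dt
target = np.sin(2.0 * np.pi * t_axis / 4.0)        # 4 s period sine target

x = rng.normal(0.0, 0.5, N)                        # neuron "currents"
r = np.tanh(x)                                     # firing rates
z = 0.0                                            # readout signal
for t in range(steps):
    # Network dynamics: the readout z is fed back into the reservoir.
    x += (dt / tau) * (-x + W @ r + w_fb * z)
    r = np.tanh(x)
    z = w_out @ r

    if t % 2 == 0:
        # FORCE update: adjust readout weights online so the output
        # error stays small at every moment of training.
        err = z - target[t]
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w_out -= err * k

print("final output vs target:", z, target[-1])
```

Because the error is driven toward zero continuously rather than after training ends, the feedback signal stays close to the target throughout; this online property is what made FORCE a plausible candidate for a living network that cannot be paused and reset.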
To address this gap, the researchers constructed biological neural networks using cultured rat cortical neurons and incorporated them into a reservoir computing framework.
By applying FORCE learning to optimize the system’s readout layer, the team successfully trained the biological networks to produce complex temporal signals comparable to those involved in motor control.
A key innovation in the study was the use of microfluidic devices to precisely guide neuronal growth and control network connectivity. This approach enabled the researchers to create modular network architectures that minimized excessive synchronization, thereby promoting the rich, high-dimensional dynamics required for effective reservoir computing.
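One way to picture the effect of that modular architecture is as a block-structured connectivity matrix: dense coupling within each "neighborhood" and sparse coupling between them, which keeps global activity from locking into a single synchronized rhythm. The sketch below is a loose abstraction; the module counts and connection probabilities are assumptions for illustration, not the paper's measured connectivity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Block-structured connectivity: dense within a module, sparse between.
n_modules, module_size = 4, 50
N = n_modules * module_size
p_intra, p_inter = 0.3, 0.02                     # assumed connection densities

module_id = np.repeat(np.arange(n_modules), module_size)
same_module = module_id[:, None] == module_id[None, :]
p = np.where(same_module, p_intra, p_inter)

# A connection exists with probability p; weights are random where present.
W = (rng.random((N, N)) < p) * rng.normal(0.0, 1.0, (N, N))

print("intra-module density:", (W[same_module] != 0).mean())
print("inter-module density:", (W[~same_module] != 0).mean())
```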
Using this system, the BNN-based framework was able to generate a variety of time-series patterns, including sine waves, triangular waves, square waves, and even chaotic trajectories such as the Lorenz attractor. Notably, the network demonstrated flexibility by learning and stably reproducing sine waves with periods ranging from 4 to 30 seconds within the same system.
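For reference, the Lorenz attractor that served as a chaotic target can be generated in a few lines. The classic parameter values and the simple Euler integrator below are standard illustrative choices, not necessarily those used in the study.

```python
import numpy as np

# Lorenz system with the classic chaotic parameters.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz_step(state, dt=0.01):
    """Advance the Lorenz system one Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

state = np.array([1.0, 1.0, 1.0])
trajectory = np.empty((5000, 3))
for i in range(5000):
    state = lorenz_step(state)
    trajectory[i] = state

# Any single coordinate is already a chaotic 1-D time series of the kind
# a trained readout could be asked to reproduce.
print(trajectory[:5, 0])
```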
“This work shows that living neuronal networks are not only biologically meaningful systems but may also serve as novel computational resources,” said Hideaki Yamamoto, a professor at Tohoku University.
“By bridging neuroscience and machine learning, we are opening a pathway toward new forms of computing that leverage the intrinsic dynamics of biological systems.”
Looking ahead, the research team aims to improve the stability of signal generation after training has concluded. Future efforts will focus on reducing feedback delays and refining the FORCE learning algorithm. In parallel, the platform may be expanded into a microphysiological system for studying drug responses and modeling neurological disorders, further extending its impact across both scientific and medical fields.
Key Questions Answered
Q: Are we basically building “Cyborg” computers now?
A: We’re moving in that direction! This is called “Wetware Computing.” Unlike traditional silicon chips, these biological reservoirs use the intrinsic, “noisy” physics of living cells to solve problems. They are incredibly energy-efficient and can adapt to new information in ways that rigid AI models often struggle with.
Q: How do you “teach” a dish of cells to do math?
A: It’s like a conductor leading an orchestra. The “reservoir” of neurons is already playing a million different notes. The researchers use FORCE learning to listen to those notes and re-weight the ones that fit the pattern they want (like a sine wave). Over time, the readout layer learns exactly which neurons to “listen” to to get the right result.
Q: What is the benefit of using real neurons over a standard AI?
A: Biology is the ultimate master of parallel processing. A single biological network can handle massive amounts of time-dependent data with very little power. Additionally, these systems could be used to test how drugs affect “thinking” circuits or to model neurological diseases in a dish without needing animal testing.
Editorial Notes:
This article was edited by a Neuroscience News editor.
Journal paper reviewed in full.
Additional context added by our staff.

About this AI and neuroscience research news
Author: Public Relations Office
Source: Tohoku University
Contact: Public Relations Office – Tohoku University
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Online supervised learning of temporal patterns in biological neural networks under feedback control” by Yuki Sono, Hideaki Yamamoto, Yusei Nishi, Takuma Sumi, Yuya Sato, Ayumi Hirano-Iwata, Yuichi Katori, and Shigeo Sato. PNAS.
DOI: 10.1073/pnas.2521560123
Abstract
Online supervised learning of temporal patterns in biological neural networks under feedback control
In vitro biological neural networks (BNNs) provide well-defined model systems for constructively investigating how living cells interact with their environments to shape high-dimensional dynamics that can be used to generate coherent temporal outputs, such as those required for motor control.
Here, we develop a real-time closed-loop BNN system that is capable of generating periodic and chaotic temporal signals by integrating cultured cortical neurons with microfluidic devices and high-density microelectrode arrays.
We show that training a simple linear decoder with fixed feedback weights enables the system to learn and autonomously generate diverse temporal patterns. When feedback is switched on, the irregular activity in the BNNs is transformed into low-dimensional, structured dynamics, producing coherent trajectories that are characterized by stable transitions between different neural states.
BNNs can be trained to sustain oscillations at distinct target periods ranging from 4 to 30 s, demonstrating their adaptability. Importantly, top-down control of the self-organized network formation with microfluidic devices is key to suppressing excessive synchronization and increasing dynamic complexity in BNNs, facilitating the training process and the generation of robust outputs.
This work offers a biologically inspired platform for understanding the physical basis of cortical computations and for advancing energy-efficient neuromorphic computing paradigms.