A team at Tohoku University and Future University Hakodate in Japan trained cultured rat cortical neurons to autonomously generate complex temporal signals using a real-time machine learning framework, according to a study published March 12 in the journal Proceedings of the National Academy of Sciences. The researchers integrated the living neurons with high-density microelectrode arrays and microfluidic devices, creating a closed-loop reservoir computing system that learned to produce periodic and chaotic waveforms without any external input.
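The setup follows the standard reservoir computing recipe: a fixed, high-dimensional dynamical system (here, living neurons) transforms an input signal, and only a simple linear readout is trained. The study's biological, real-time framework isn't reproducible in a few lines, but the core idea can be illustrated with a purely software toy. The sketch below uses a small random tanh reservoir and trains its linear readout online with a least-mean-squares rule to reproduce a sine wave; the reservoir size, leak rate, learning rule, and teacher forcing are assumptions of this illustration, not details from the paper.

```python
import math
import random

random.seed(0)
N = 50      # reservoir units (stand-in for the study's wells of living neurons)
T = 300     # time steps per training pass
leak = 0.3  # leak rate of each unit

# Random, fixed recurrent and input weights -- the "reservoir" itself is never trained.
W = [[random.gauss(0, 1.0 / math.sqrt(N)) for _ in range(N)] for _ in range(N)]
w_in = [random.uniform(-1, 1) for _ in range(N)]
w_out = [0.0] * N  # linear readout: the only trained parameters

def step(x, u):
    """One leaky-tanh reservoir update driven by scalar input u."""
    return [(1 - leak) * x[i] + leak * math.tanh(
                sum(W[i][j] * x[j] for j in range(N)) + w_in[i] * u)
            for i in range(N)]

target = [math.sin(2 * math.pi * t / 50) for t in range(T)]

errors = []
for epoch in range(10):
    x = [0.0] * N
    sq = 0.0
    for t in range(1, T):
        x = step(x, target[t - 1])  # teacher forcing: the true signal is fed back
        y = sum(w_out[i] * x[i] for i in range(N))
        e = target[t] - y
        sq += e * e
        for i in range(N):          # LMS update of the readout weights only
            w_out[i] += 0.01 * e * x[i]
    errors.append(sq / (T - 1))
```

After training, feeding the readout's own prediction back in place of the teacher signal lets the loop run autonomously, which is the regime in which the study's biological system was evaluated.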
The enabling technology, per the researchers, was the use of PDMS (polydimethylsiloxane) microfluidic films to constrain how the neurons connected. Without physical constraints, cultured neurons form dense, highly synchronized networks that fire in lockstep, and these homogeneous networks failed to learn any of the target signals.
Instead, the researchers confined neuronal cell bodies to 128 square wells, each roughly 100×100 micrometers, with each well holding an average of 14.6 neurons. The wells were linked by microchannels in two configurations: a lattice design with uniform nearest-neighbor connections, and a hierarchical design with sparser, multi-scale connections.
Both patterned configurations dramatically reduced pairwise neural correlations compared to unpatterned cultures (0.11 for lattice and 0.12 for hierarchical, versus 0.45), increasing the dimensionality of the network’s dynamics. Lattice networks consistently outperformed hierarchical ones across all target waveforms, likely because their denser intermodular connections produced higher firing rates, giving the linear decoder more signal to work with.
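Pairwise correlation here means the Pearson correlation between the activity traces of two units, averaged over all pairs; lower values indicate less synchronized, higher-dimensional dynamics. A minimal sketch of that computation, over hypothetical activity series rather than the study's recordings:

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length activity traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def mean_pairwise_correlation(series):
    """Average Pearson correlation over all distinct pairs of units."""
    pairs = [(i, j) for i in range(len(series)) for j in range(i + 1, len(series))]
    return sum(pearson(series[i], series[j]) for i, j in pairs) / len(pairs)
```

Perfectly synchronized traces score 1.0 by this measure; the patterned cultures' scores of 0.11 and 0.12 indicate largely independent activity across wells.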
Tests showed rat brain neurons are ‘novel computational resources’
Using the lattice and hierarchical networks, the system learned to generate sine waves with periods of 4, 10, and 30 seconds, as well as triangle and square waves, and the same culture preparation could be retrained to oscillate at different frequencies. The researchers also demonstrated that the system could approximate a Lorenz attractor, a three-dimensional chaotic trajectory, with pairwise correlations above 0.8 between predicted and target signals across all dimensions during the learning phase.
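The Lorenz attractor is defined by three coupled differential equations with standard parameters σ=10, ρ=28, β=8/3, and serves as a common benchmark for chaotic signal generation. A minimal sketch of how such a target trajectory can be produced (using forward Euler integration, an implementation choice of this example, not necessarily the study's method):

```python
def lorenz_trajectory(steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler; returns a list of (x, y, z)."""
    x, y, z = 1.0, 1.0, 1.0
    out = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out.append((x, y, z))
    return out
```

The trajectory never repeats and diverges sharply from nearby starting points, which is what makes reproducing it a far harder test than a periodic waveform.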
“This work shows that living neuronal networks are not only biologically meaningful systems but may also serve as novel computational resources,” said Hideaki Yamamoto, a professor at Tohoku University’s Research Institute of Electrical Communication, in a press release published on the institution’s website.
Performance degraded after training was halted and the system ran autonomously, with mean squared error increasing in 99% of trials. The feedback loop’s roughly 330-millisecond latency also limited the system’s ability to track fast-changing or sharp-edged waveforms. The researchers noted that reducing this delay through specialized hardware or alternative filtering could expand the range of learnable targets, with future applications potentially extending to brain-machine interfaces and neuroprosthetic devices.