Here’s what you’ll learn when you read this story:

- Researchers from the LNM Institute of Information Technology and the Indian Institute of Information Technology recently proposed a new model for AI—one that goes through a human life cycle.
- The team focused on pairing different AI systems with their functional equivalents in the human brain.
- By mimicking our own biological advantages, the researchers believe that AI could eventually become an ever-evolving assistant that learns to better suit individual users’ needs.
Freed from the bounds of biology, AI can learn from data at speeds incomprehensible to the human mind. And with that speed, AI can accomplish things that might normally take a human hours to complete, like analyzing a database of thousands of customer habits. But our brain, sculpted by millions upon millions of years of evolution, is more energy efficient, more adaptable, and can learn from a relatively limited pool of data—we don’t need to see thousands of images of horses to learn what a horse is.
To mimic these biological benefits, some scientists think we need to push AI more toward how the human brain itself functions. For some, that’s led to the idea of neuromorphic computing, which aims to replicate the human brain’s neural structures within computers; for others it means tearing down current AI architectures and starting from scratch.
But a new peer-reviewed article published earlier this year in the International Journal of Transdisciplinary Research and Perspectives is taking a slightly different approach. Researchers are pairing AI systems with parts of the human brain, while also giving the AIs their own version of a human life cycle. According to the paper, the new AI system develops a personality, sleeps, dreams—and eventually dies. This means AI could essentially become an assistant that progressively adapts to its user, rather than just a rigid input-output machine.
In the paper, computer scientist Krrish Choudhary of the LNM Institute of Information Technology in Jaipur, India, along with his coauthor, Tanvi Kandoi of the Indian Institute of Information Technology, uses a neuroscience-inspired approach to replicate the human mind. The pair draw parallels between AI models and nearly two dozen brain structures, processes, hormones, and neurotransmitters. For example, the visual cortex could be paired with Google DeepMind’s vision-language model (VLM), PaliGemma. Here, “REM sleep” would play out via synthetic generation—the AI producing text, images, and videos much the way our brains conjure scenes while dreaming.
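The two pairings named above can be pictured as a plain lookup from brain structure to AI component. This is only an illustrative sketch of the paper’s “functional equivalence” idea; the dictionary format and wording are ours, and only these two mappings are named in the article.

```python
# Illustrative sketch: pair brain structures with existing AI components.
# Only the two entries below come from the article; the data-structure
# choice is an assumption for illustration.
brain_to_ai = {
    "visual cortex": "PaliGemma (Google DeepMind vision-language model)",
    "REM sleep": "synthetic generation of text, images, and video",
}

# Print the mapping, one pairing per line.
for region, component in brain_to_ai.items():
    print(f"{region} -> {component}")
```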
“The architecture described in the paper… organizes intelligence into specialized subsystems, closely mirroring the functional layout of the brain,” Choudhary says in an email. “This differs sharply from neuromorphic computing… instead, our approach focuses on functional equivalence, using existing AI components to reconstruct the organizational logic of the brain.”
Choudhary founded the company Versace AGI to scale these ideas into a working system. Other researchers seem to agree that the same framework that guides the human brain could work for AI. For instance, the Netherlands Institute for Neuroscience is exploring ways to make AI learn the way the human brain does. Likewise, scientists from Johns Hopkins University are investigating how AI can leverage the learning abilities of the human brain. Meanwhile, other startups are building brain-like AI modeled on tiny worms, whose nervous systems similarly need very little data to learn.
Of course, part of the difficulty of building an AI to mimic the human brain is that scientists don’t really know how the human brain works, especially as it relates to consciousness. While using ideas grounded in neuroscience—such as the three rules of multisensory integration and the free energy principle, which help explain perception and learning—Choudhary and Kandoi also focused on global workspace theory (GWT).
GWT is a leading theoretical framework in consciousness research. It describes consciousness as arising when modules of the brain (distinct units of the brain’s network, such as vision or language) compete for attention, with the winning content broadcast to the rest of the system. Choudhary argues that GWT is a good fit for this brain-based approach to AI because the shared workspace can function as working memory in an AI system. Competing theories, like Integrated Information Theory, don’t map as cleanly because they aren’t as focused on memory and learning, Choudhary says.
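As a toy illustration—not the paper’s actual architecture—GWT-style competition can be sketched in a few lines: specialized modules bid for attention, and the winning output lands in a shared workspace that behaves like working memory. Every name and the salience heuristic here are invented for illustration.

```python
# Toy sketch of global workspace theory (GWT): modules compete for
# attention; the winner is "broadcast" into a shared workspace that
# acts like working memory. All names and scoring are illustrative.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str

    def propose(self, stimulus: str) -> tuple[float, str]:
        # Crude salience heuristic: a module whose name appears in the
        # stimulus bids higher. A real system would learn salience.
        salience = 1.0 if self.name in stimulus else 0.1
        return salience, f"{self.name} interpretation of {stimulus!r}"

@dataclass
class GlobalWorkspace:
    modules: list[Module]
    memory: list[str] = field(default_factory=list)  # working memory

    def step(self, stimulus: str) -> str:
        # All modules bid; the most salient proposal wins the broadcast.
        bids = [m.propose(stimulus) for m in self.modules]
        _, winner = max(bids)
        self.memory.append(winner)  # broadcast: now visible system-wide
        return winner

workspace = GlobalWorkspace([Module("vision"), Module("language")])
print(workspace.step("vision: a horse in a field"))
```

Each call to `step` stores the winning interpretation in `memory`, which is the rough analogue of the working memory Choudhary describes.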
But arguably the most intriguing idea in the paper is one where the AI goes through what is essentially a human life cycle. The idea is to create a model that “is born when started, develops personality through experience and reward, sleeps to consolidate memories into permanent knowledge, and dies when stopped,” according to the study. While our finite lives may seem like a drawback, the authors argue that this is a feature for AI systems—not a flaw.
Choudhary explains that it’s much more productive for AI to have a dynamic life cycle, rather than just rigid responses. “Intelligence requires persistence, which is why our architecture emphasizes long-term memory, episodic recall, and continuous self-adaptation,” he says.
The idea is that an AI model that grows the way our brain grows could develop continuity—meaning it essentially doesn’t “forget” everything between requests—along with stable memories and a personal identity. Those qualities underpin the kind of context-aware decisions humans make. Crucially, for someone using such a system, the AI would feel less like a tool and more like an ever-evolving assistant that remembers and improves with time.
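The life cycle the study describes—born when started, shaped by experience and reward, sleeping to consolidate memories, dying when stopped—can be sketched as a minimal loop. This is our own sketch under an assumed reward-gated consolidation rule; the method names are not from the paper.

```python
# Minimal sketch of the paper's life-cycle idea. The consolidation
# rule (keep only positively rewarded experiences) and all method
# names are assumptions for illustration.
class LifecycleAgent:
    def __init__(self):
        # "Birth": the agent starts alive with empty memories.
        self.alive = True
        self.episodic = []    # short-term experiences since last sleep
        self.long_term = {}   # consolidated, permanent knowledge

    def experience(self, event: str, reward: float) -> None:
        # Waking life: record each experience with its reward signal.
        self.episodic.append((event, reward))

    def sleep(self) -> None:
        # "Sleep": consolidate rewarding episodes into long-term
        # knowledge, then clear episodic memory -- a crude stand-in
        # for memory replay during sleep.
        for event, reward in self.episodic:
            if reward > 0:
                self.long_term[event] = self.long_term.get(event, 0.0) + reward
        self.episodic.clear()

    def shutdown(self) -> None:
        # "Death": the agent stops.
        self.alive = False

agent = LifecycleAgent()
agent.experience("helped user draft email", 1.0)
agent.experience("gave wrong answer", -1.0)
agent.sleep()
print(agent.long_term)  # only the rewarded experience persists
```

Across many wake–sleep cycles, `long_term` accumulates what worked for a particular user, which is the continuity-with-adaptation behavior the authors are after.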
Artificial general intelligence is often portrayed as a zero-sum game with AI eventually usurping the human brain on the throne of superintelligence. But as recent research is beginning to suggest, the road to useful AI may be a much more organic process—one that looks much less artificial and much more human.
Darren lives in Portland, has a cat, and writes/edits about sci-fi and how our world works. You can find his previous stuff at Gizmodo and Paste if you look hard enough.