{"id":374103,"date":"2026-04-11T05:28:17","date_gmt":"2026-04-11T05:28:17","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/374103\/"},"modified":"2026-04-11T05:28:17","modified_gmt":"2026-04-11T05:28:17","slug":"the-ai-brain-that-gets-smarter-by-shrinking","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/374103\/","title":{"rendered":"The AI Brain That Gets Smarter by Shrinking"},"content":{"rendered":"<p>Summary: In the world of AI, bigger is usually seen as better\u2014but this leads to massive energy consumption and computational costs. Taking a cue from human biology, a research team has developed a brain-inspired \u201cselective pruning\u201d framework for Spiking Neural Networks (SNNs).<\/p>\n<p>The study reveals that AI doesn\u2019t need more connections to learn complex tasks; it needs the right ones. By mimicking how an infant\u2019s brain strengthens long-range links while \u201cpruning\u201d away local clutter, this new AI achieves continual learning\u2014mastering perception, motor control, and interaction\u2014while actually getting smaller and more energy-efficient over time.<\/p>\n<p>Key Facts<\/p>\n<p>The \u201cInfant\u201d Approach: Human brains don\u2019t just add connections; they refine them. This model follows a \u201csimple-to-complex\u201d trajectory, maturing primary modules (like perception) before moving on to higher cognition.Selective Pruning: Unlike traditional AI that freezes weights to prevent forgetting, this system introduces a feedback mechanism that actively inhibits and removes redundant local connections from earlier tasks.Knowledge Reuse: While local clutter is pruned, cross-regional \u201clong-range\u201d connections are strengthened. This allows the AI to reuse knowledge from old tasks to solve new ones without needing more \u201cbrain\u201d space.No More \u201cCatastrophic Forgetting\u201d: A major hurdle in AI is that learning something new often \u201cerases\u201d the old. 
This developmental framework mitigates that loss without using energy-heavy tricks like \u201cexperience replay.\u201d<\/p>\n<p>Sustainably Evolving: The network scale is continuously reduced as learning progresses, offering a low-energy pathway toward General Cognitive Intelligence.<\/p>\n<p>Source: Science China Press<\/p>\n<p>How does artificial intelligence continue to improve its capabilities? <\/p>\n<p>For a long time, expanding model size has been regarded as an important way to enhance the performance of artificial neural networks, but it has also led to rising energy consumption and growing computational costs.<\/p>\n<p>  <img fetchpriority=\"high\" decoding=\"async\" width=\"1200\" height=\"800\" src=\"https:\/\/www.newsbeep.com\/nz\/wp-content\/uploads\/2026\/04\/ai-shrinking-brain-neuroscience.jpg\" alt=\"This shows a brain with a network of lights.\"  \/> The research team found that brain-like dynamic changes\u2014selective inhibition and strengthening\u2014enable AI to acquire new capabilities in a low-energy, efficient manner. Credit: Neuroscience News<\/p>\n<p>In contrast, during development the human brain does not simply increase connection density; instead, it continuously gains new cognitive abilities through selective pruning.<\/p>\n<p>Inspired by these principles, the research team proposed a temporal development-inspired continual learning framework for spiking neural networks. 
By enabling the temporal establishment and reorganization of connections across different regions, the approach achieves continual learning from simple to complex across perception\u2013motor\u2013interaction tasks while network size is progressively reduced, offering a new pathway toward low-energy, sustainably evolving general cognitive intelligence.<\/p>\n<p>Temporal Development\u2013Inspired Continual Learning Mechanism<\/p>\n<p>Studies show that brain development follows clear temporal principles: neural connectivity first increases and then becomes refined, with cross-regional long-range connections gradually strengthening while local connections are selectively pruned.<\/p>\n<p>Primary brain regions mature earlier to support higher cognition, and feedback from higher cognitive functions in turn optimizes lower-level structures. Throughout this process, infants progressively acquire multiple cognitive functions from simple to complex. Building on these principles, the researchers proposed a temporal development-inspired continual learning method.<\/p>\n<p>The approach allows cognitive modules in spiking neural networks to grow progressively following the learning sequence of perception, motor control, and interaction, while evolving cross-regional long-range connections to promote knowledge reuse across tasks.<\/p>\n<p>At the same time, feedback mechanisms are introduced to inhibit and prune redundant local connections from earlier tasks, enabling the network to become increasingly compact as learning progresses.<\/p>\n<p>Energy-Efficient Cross-Domain Continual Learning<\/p>\n<p>The research team found that the proposed method demonstrates stable and strong continual learning performance across multiple cognitive domains, including perception, motor control, and interaction, and achieves leading results on several widely used continual learning benchmarks.<\/p>\n<p>Experimental results show that the model learns complex tasks progressively along a 
\u201csimple-to-complex\u201d trajectory, clearly outperforming direct training or direct pruning approaches.<\/p>\n<p>Even as the network scale is continuously reduced, the model effectively preserves memory of previously learned tasks, significantly mitigating catastrophic forgetting while continuing to acquire new cognitive capabilities.<\/p>\n<p>Further analysis indicates that this performance gain arises from brain-like dynamic changes within the network. As learning progresses, local connections first grow rapidly and are then selectively inhibited and pruned, reducing interference from irrelevant or outdated knowledge, while cross-regional long-range connections are continuously strengthened to support the selective reuse of prior knowledge with shared structure and semantics.<\/p>\n<p>Importantly, this process does not rely on conventional continual learning strategies such as regularization, experience replay, or weight freezing.<\/p>\n<p>The researchers note that this brain-inspired developmental mechanism enhances learning and memory in an efficient, low-energy manner, highlighting the potential of brain developmental principles to drive the next generation of artificial intelligence.<\/p>\n<p>Key Questions Answered<\/p>\n<p>Q: If the AI is \u201cpruning\u201d connections, won\u2019t it forget what it learned first?<\/p>\n<p class=\"schema-faq-answer\">A: Surprisingly, no. In the human brain, we prune the \u201cstatic\u201d or redundant local noise to make the important long-range connections faster. This AI does the same: it deletes the specific \u201cclutter\u201d of an old task but keeps the high-level \u201cconcepts\u201d in its long-range network, allowing it to remember more with less.<\/p>\n<p>Q: Why are \u201cSpiking Neural Networks (SNNs)\u201d a big deal here?<\/p>\n<p class=\"schema-faq-answer\">A: SNNs are the most brain-like form of AI because they only process information in \u201cpulses\u201d (spikes) rather than constant data streams. 
Combining SNNs with \u201cselective pruning\u201d makes this one of the most energy-efficient AI models ever created.<\/p>\n<p>Q: How does this solve the AI energy crisis?<\/p>\n<p class=\"schema-faq-answer\">A: Currently, as AI gets smarter (like GPT-4), the hardware requirements and electricity bills skyrocket. This model suggests that AI can follow a \u201cbiological growth curve\u201d\u2014where it actually requires less power and fewer parameters as it matures and becomes an expert.<\/p>\n<p>Editorial Notes: This article was edited by a Neuroscience News editor. The journal paper was reviewed in full. Additional context was added by our staff.<\/p>\n<p>About this AI and neuroscience research news<\/p>\n<p class=\"has-background\" style=\"background-color:#ffffe8\">Author:\u00a0<a href=\"http:\/\/neurosciencenews.com\/cdn-cgi\/l\/email-protection#bbc2dad5d9ded2fbc8d8d2d8d3d2d5da95d8d4d6\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Bei Yan<\/a><br \/>Source:\u00a0<a href=\"https:\/\/scichina.com\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Science China Press<\/a><br \/>Contact:\u00a0Bei Yan \u2013 Science China Press<br \/>Image:\u00a0The image is credited to Neuroscience News<\/p>\n<p class=\"has-background\" style=\"background-color:#ffffe8\">Original Research:\u00a0Open access.<br \/>\u201c<a href=\"https:\/\/doi.org\/10.1093\/nsr\/nwag066\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Continual Learning of Multiple Cognitive Functions with Brain-inspired Temporal Development Mechanism<\/a>\u201d by Bing Han, Feifei Zhao, Yinqian Sun, Wenxuan Pan, and Yi Zeng.\u00a0National Science Review<br \/>DOI: 10.1093\/nsr\/nwag066<\/p>\n<p>Abstract<\/p>\n<p>Continual Learning of Multiple Cognitive Functions with Brain-inspired Temporal Development Mechanism<\/p>\n<p>Cognitive functions in current artificial intelligence networks are tied to the exponential increase in network scale, whereas the human brain can continuously learn hundreds of cognitive functions 
with remarkably low energy consumption.<\/p>\n<p>This advantage partly arises from the brain\u2019s cross-regional temporal development mechanisms, where the progressive formation, reorganization, and pruning of connections from basic to advanced regions facilitate knowledge transfer and prevent network redundancy.<\/p>\n<p>Inspired by these mechanisms, we propose the Continual Learning of Multiple Cognitive Functions with Brain-inspired Temporal Development Mechanism (TD-MCL), enabling cognitive enhancement from simple to complex in Perception-Motor-Interaction (PMI) tasks.<\/p>\n<p>The model drives sequential evolution of long-range inter-module connections to facilitate positive knowledge transfer, and uses feedback-guided local inhibition\/pruning to eliminate prior task redundancies, reducing energy consumption while preserving acquired knowledge.<\/p>\n<p>Experiments on the proposed cross-domain PMI dataset and general datasets (CIFAR100, ImageNet) show that the proposed method can achieve continual learning capabilities while reducing network scale, without introducing regularization, replay, or freezing strategies, and achieves superior accuracy on new tasks compared to direct learning.<\/p>\n<p>The proposed method shows that the brain\u2019s developmental mechanisms offer a valuable reference for exploring biologically plausible, low-energy enhancements of general cognitive abilities.<\/p>\n","protected":false},"excerpt":{"rendered":"Summary: In the world of AI, bigger is usually seen as better\u2014but this leads to massive energy 
consumption&hellip;\n","protected":false},"author":2,"featured_media":374104,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[365,363,364,77644,195176,195177,7951,2489,195178,6570,111,139,69,186097,195179,195180,145],"class_list":{"0":"post-374103","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-brain-inspired-ai","12":"tag-cognitive-intelligence","13":"tag-continual-learning","14":"tag-energy-efficient-ai","15":"tag-machine-learning","16":"tag-neural-development","17":"tag-neuroscience","18":"tag-new-zealand","19":"tag-newzealand","20":"tag-nz","21":"tag-science-china-press","22":"tag-selective-pruning","23":"tag-spiking-neural-networks","24":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/374103","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/comments?post=374103"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/374103\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/374104"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=374103"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=374103"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/tags?post=3
74103"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}