The human brain is complex. Artificial intelligence (AI), machine learning, and medical imaging data are accelerating breakthroughs in brain health, especially in medical diagnostics. A peer-reviewed study published today in Nature Neuroscience unveils an AI foundation model called BrainIAC (Brain Imaging Adaptive Core) that is capable of predicting brain age, dementia, time-to-stroke, and brain cancer from brain magnetic resonance imaging (MRI).

“We find that BrainIAC consistently outperforms traditional supervised models and transfer learning from more general biomedical imaging models across a wide range of downstream applications on healthy and disease-containing scans with minimal fine-tuning,” wrote corresponding author Benjamin Kann of the Dana-Farber Cancer Institute, Brigham and Women’s Hospital, and Harvard Medical School in Boston, along with co-authors Divyanshu Tak, Biniam Garomsa, Anna Zapaishchykova, Tafadzwa Chaunzwa, Juan Carlos Climent Pardo, Zezhong Ye, John Zielke, Yashwanth Ravipati, Suraj Pai, Sri Vajapeyam, Maryam Mahootiha, Mitchell Parker, Luke Pike, Ceilidh Smith, Ariana Familiar, Kevin Liu, Sanjay Prabhu, Omar Arnaout, Pratiti Bandopadhayay, Ali Nabavizadeh, Sabine Mueller, Hugo Aerts, Raymond Huang, and Tina Poussaint.

What sets this study apart is the generalizability of the researchers’ AI foundation model. Traditional, or narrow, AI machine learning models are typically built for a single task and are often trained on small datasets of labeled data. Foundation models, on the other hand, are more flexible and general-purpose, and can perform a wide range of tasks; they are pre-trained using self-supervised learning on massive datasets of unlabeled data, and they use transfer learning to apply knowledge gained from one task to another. One type of foundation model is the large language model (LLM). Examples of LLMs include OpenAI’s GPT (GPT-5, GPT-4.1, etc.), Google’s Gemini, BERT, PaLM, T5, and XLNet, Meta AI’s Llama, Anthropic’s Claude, and xAI’s Grok, among others. Other foundation models work with images or multiple modalities, such as OpenAI’s DALL-E, Stability AI’s Stable Diffusion, and Google DeepMind’s Imagen.
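The pretrain-then-transfer idea can be illustrated with a toy sketch. This is not BrainIAC’s actual pipeline (which uses deep self-supervised learning on MRI volumes); it is a minimal stand-in, assuming synthetic data, where a linear encoder is “pretrained” on a large unlabeled pool via SVD (a PCA-style proxy for learned representations) and then reused by a simple nearest-centroid head trained on only five labeled examples per class:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- "Pretraining" on unlabeled data ---
# Large unlabeled pool: two latent clusters in 10-D, separated along dimension 0.
n_unlabeled = 1000
X_unlabeled = rng.normal(size=(n_unlabeled, 10))
X_unlabeled[: n_unlabeled // 2, 0] += 4.0   # cluster A
X_unlabeled[n_unlabeled // 2 :, 0] -= 4.0   # cluster B

# Learn a 2-D linear encoder from the unlabeled pool via SVD -- a crude
# proxy for the representation a self-supervised model would learn.
mean = X_unlabeled.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlabeled - mean, full_matrices=False)
encoder = Vt[:2].T                          # shape (10, 2)

def encode(X):
    """Project raw inputs into the pretrained 2-D feature space."""
    return (X - mean) @ encoder

# --- Few-shot "fine-tuning": only 5 labeled examples per class ---
X_few = rng.normal(size=(10, 10))
X_few[:5, 0] += 4.0
X_few[5:, 0] -= 4.0
y_few = np.array([0] * 5 + [1] * 5)

# A nearest-centroid head fit on the encoded few-shot set.
centroids = np.stack([encode(X_few[y_few == c]).mean(axis=0) for c in (0, 1)])

def predict(X):
    dists = np.linalg.norm(encode(X)[:, None, :] - centroids[None], axis=2)
    return dists.argmin(axis=1)

# Held-out test points from each cluster.
X_test = rng.normal(size=(200, 10))
X_test[:100, 0] += 4.0
X_test[100:, 0] -= 4.0
y_test = np.array([0] * 100 + [1] * 100)
accuracy = (predict(X_test) == y_test).mean()
print(f"few-shot accuracy with pretrained features: {accuracy:.2f}")
```

The key point the study makes is the same one this sketch hints at: because the heavy representational work is done once on abundant unlabeled data, the downstream task needs only a handful of labeled examples.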

Neuroscience faces the challenge of sparse brain data for training AI algorithms. The overall accuracy of AI machine learning depends on a number of factors, of which the quantity and quality of training data are key.

For their AI foundation model, the researchers used imaging data from 34 datasets totaling over 48,900 brain MRI scans across 10 neurological conditions. The conditions include Alzheimer’s disease (10,222 scans), dementia (2,749 scans), stroke (3,641 scans), Parkinson’s disease (547 scans), brain cancer (200 high-grade glioma scans, 8,537 glioblastoma scans, and 990 diffuse glioma scans), pediatric low-grade glioma (5,999 scans), autism spectrum disorder (1,099 scans), and healthy (14,981 scans).

The AI foundation model performs a wide range of tasks, including predicting overall brain cancer survival, dementia, brain age, isocitrate dehydrogenase (IDH) mutations, sequence classification, time-to-stroke, and brain tumor segmentation. Glioblastoma multiforme (GBM), the most common form of malignant brain cancer, is aggressive, deadly, and currently incurable. Mutations in IDH may help classify GBM, according to a different study published in Science in 2008 by D. Williams Parsons et al.

The scientists benchmarked their AI foundation model against pretrained supervised learning AI models and found that it outperformed them, especially in few-shot settings with limited data and on highly difficult prediction tasks.

“Our findings demonstrate BrainIAC’s adaptive and generalization capabilities, positioning it as a powerful foundation for development of clinically usable imaging-based deep learning tools, particularly in limited data scenarios,” wrote the scientists.

According to the research team, they have created the largest pretrained brain MRI AI foundation model, one that is more flexible than traditional, narrow brain MRI AI models trained via supervised learning. As next steps, the team plans to incorporate omics and clinical data to make the AI foundation model multimodal, as well as to further enhance its performance.

“BrainIAC can be integrated into imaging pipelines and multimodal frameworks and may lead to improved biomarker discovery and artificial intelligence clinical translation,” the researchers concluded.

Copyright © 2026 Cami Rosso. All rights reserved.