Artificial intelligence is increasingly used to analyse large, multimodal Alzheimer’s datasets and inform target discovery and trial design. A new JPAD special issue highlights how these methods are moving from experimentation towards practical application in drug discovery.

For millions of patients and families affected by Alzheimer’s disease, progress has long felt frustratingly slow. Despite intense research efforts, treatment options have been limited and early diagnosis has remained challenging. Recent developments suggest that this picture may finally be changing as advances in disease biology, large-scale data sharing and artificial intelligence (AI) begin to align.
AI is now being used to support earlier diagnosis of Alzheimer’s disease, identify new drug targets and redesign clinical trials. These applications are examined in a special issue of the Journal of Prevention of Alzheimer’s Disease (JPAD).
Commissioned by Gates Ventures and the Alzheimer’s Disease Data Initiative, the issue brings together researchers from eight countries and reflects the growing integration of AI methods into Alzheimer’s research.
To discuss how these approaches are being applied in practice and what they mean for early drug discovery and translational science, Drug Target Review spoke with Dr Niranjan Bose, Interim Executive Director of the Alzheimer’s Disease Data Initiative and Managing Director for Health and Life Sciences Strategy at Gates Ventures, where he serves as a science advisor to Bill Gates.
According to Bose, the issue reflects the AD Data Initiative’s focus on large-scale data sharing and collaboration. “The AD Data Initiative brings together a global coalition of philanthropic, industry, government and nonprofit partners to fundamentally transform Alzheimer’s research through data sharing and global collaboration.”
Why now?
According to Bose, three forces have aligned to make this a particularly significant moment for the field. The first is tangible clinical progress. After decades of disappointing trials, Alzheimer’s research is finally beginning to deliver disease-modifying therapies and accessible diagnostics.
“We now have two FDA-approved disease-modifying treatments on the market and, for the first time, simple blood-based diagnostic tests that make broad screening a real possibility,” he says. While these advances are not cures, they demonstrate that the biology of Alzheimer’s can be altered in meaningful ways and that intervention earlier in the disease course is becoming feasible.
The second force is the rapid evolution of AI itself. What was once limited to relatively narrow pattern recognition has matured into systems capable of reasoning across complex datasets and generating hypotheses.
“We’ve moved from tools for simple automation to advanced, ‘agentic’ systems that can reason across complex datasets, plan analyses autonomously and generate testable hypotheses.”
The third factor is data readiness. AI’s promise depends on access to large, well-curated and harmonised datasets, something that has historically been lacking in Alzheimer’s research. Initiatives such as the Global Neurodegeneration Proteomics Consortium and the AD Data Initiative’s AD Workbench are building shared, standardised datasets and secure analysis environments for this purpose.
A new way of doing discovery science
The JPAD issue outlines how AI is being applied to biological discovery in ways that differ from traditional hypothesis-driven research. Rather than testing individual assumptions sequentially, AI models analyse large, multimodal datasets in parallel to identify candidate mechanisms and therapeutic targets.
“AI is helping us see biological patterns at scales that human cognition simply can’t,” Bose explains. By analysing genomics, proteomics, imaging and clinical data simultaneously, machine learning models can surface relationships that would otherwise remain hidden.
The issue includes papers describing the use of AI to integrate inconsistent findings across large numbers of Alzheimer’s studies and to analyse multimodal datasets for therapeutic target identification and prioritisation.
“These use cases aren’t just faster versions of the same discovery process; they would enable an entirely new way of doing science,” Bose says. For early drug discovery teams, that matters. It suggests that AI may help reduce attrition not only by speeding up target identification but by improving the underlying biological hypotheses that inform programme selection.
Rethinking clinical trials
Clinical development remains one of the most expensive and failure-prone stages of Alzheimer’s drug development. Here too, the special issue argues that AI has the potential to deliver step changes rather than marginal gains.
One long-standing challenge is what Bose calls the field’s ‘Goldilocks problem’: identifying trial participants who are at the right stage of disease progression.
“Machine learning helps solve what researchers call the Alzheimer’s ‘Goldilocks problem’: finding participants for studies who are at the right stage of disease, neither too early nor too late.”
By integrating multimodal data, AI models can better predict how the disease is likely to progress in individual patients. This enables more precise recruitment, smaller trial cohorts and faster readouts without sacrificing statistical power.
Digital twin models push this concept further by simulating disease trajectories and treatment responses virtually. Researchers can test trial designs, dosing strategies and endpoints in silico before enrolling patients, reducing risk and improving decision-making.
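The in-silico approach described above can be illustrated with a toy Monte Carlo sketch. Everything here is hypothetical for illustration: the decline rates, noise levels and effect sizes are invented, whereas real digital twin models are trained on multimodal patient data.

```python
import random
import statistics

def simulate_decline(n, months, slope, effect=0.0, noise=1.5, seed=None):
    """Simulate total change in a cognitive score for n virtual participants.

    Each participant declines at roughly `slope` points per month; a
    treatment `effect` of 0.3 means a 30% slowing of that decline.
    (All parameter values here are illustrative, not from real cohorts.)
    """
    rng = random.Random(seed)
    return [
        (slope * (1 - effect)) * months + rng.gauss(0, noise)
        for _ in range(n)
    ]

def trial_readout(n_per_arm, months=18, slope=-0.2, effect=0.3, seed=42):
    """Compare mean decline between simulated placebo and treatment arms."""
    placebo = simulate_decline(n_per_arm, months, slope, 0.0, seed=seed)
    treated = simulate_decline(n_per_arm, months, slope, effect, seed=seed + 1)
    return statistics.mean(treated) - statistics.mean(placebo)

# A positive difference means the simulated treated arm declined less
# than placebo over the virtual trial.
diff = trial_readout(n_per_arm=200)
```

Running many such simulations across candidate designs (varying arm size, duration or endpoint) is the basic idea behind stress-testing a trial before any patient is enrolled.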
Beyond efficiency, Bose emphasises the human impact. AI-enabled trial designs can reduce the burden on participants through shorter studies, fewer invasive procedures and smarter remote monitoring using digital biomarkers, which may improve recruitment and retention in a disease area where patient engagement is particularly challenging.
Real-world complexity
The quality and diversity of underlying data are critical for the effective application of AI in Alzheimer’s research. Alzheimer’s disease is biologically and clinically complex, influenced by genetics, lifestyle, environment and comorbidities. Narrow datasets risk producing biased models with limited clinical relevance.
“Researchers have to look at data across different modalities that are representative of the full diversity of people affected by the disease,” Bose says. This includes genomics, proteomics, imaging, digital biomarkers, lifestyle and longitudinal clinical data.
Proteomics holds particular importance because proteins reflect the functional state of biology and are the direct targets of most drugs. The scale of recent efforts in this area is striking. The Global Neurodegeneration Proteomics Consortium has built what is now the world’s largest disease-specific proteomics dataset.
“In fact, the world’s largest disease-specific proteomics dataset, which includes more than 250 million protein measurements from more than 35,000 samples across 23 cohorts and counting, was built by the Global Neurodegeneration Proteomics Consortium on AD Workbench.”
Equally important is harmonisation. Combining data across cohorts and platforms requires standardisation and secure infrastructure. AD Workbench was designed to address precisely this challenge by allowing researchers worldwide to discover, access and analyse diverse datasets in one environment.
Ethics, trust and responsible deployment
Bose highlights the need to address ethical considerations as AI becomes more widely used in Alzheimer’s research.
“Responsible AI begins with representation and validation,” he says. Models trained on data from narrow or homogeneous populations can produce biased results and perform poorly when applied to the broader patient population. Ensuring that datasets reflect the diversity of people affected by Alzheimer’s disease is therefore necessary for both scientific reliability and equitable application.
Transparency is another key issue. Clinicians and regulators need to understand why an AI system produces a particular result, not just the output itself.
“Black-box models undermine trust and hinder adoption in healthcare, where lives and decisions are on the line.” By black-box models, Bose is referring to AI systems whose internal decision-making processes are not transparent or easily interpretable, making it difficult for clinicians to understand, validate or act on their outputs.
Privacy remains a concern in any data-driven field, particularly when dealing with sensitive health information. Bose points to federated and privacy-preserving data-sharing frameworks as proof that collaboration and confidentiality are not mutually exclusive.
“AD Workbench and its interoperable partners show that it’s entirely possible to share data globally through secure, privacy-preserving frameworks.”
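The core idea behind such federated frameworks can be sketched in a few lines. This is a minimal illustration of the general pattern, not AD Workbench’s actual implementation: each site computes aggregates locally, and only those aggregates, never row-level patient data, cross institutional boundaries.

```python
def site_summary(values):
    """Computed locally at each site; only these aggregates leave the site."""
    return {
        "n": len(values),
        "sum": sum(values),
        "sum_sq": sum(v * v for v in values),
    }

def pooled_mean_variance(summaries):
    """Coordinator combines per-site aggregates into global statistics
    without ever seeing an individual patient record."""
    n = sum(s["n"] for s in summaries)
    total = sum(s["sum"] for s in summaries)
    total_sq = sum(s["sum_sq"] for s in summaries)
    mean = total / n
    variance = total_sq / n - mean * mean  # population variance
    return mean, variance

# Two hypothetical cohorts each summarise a biomarker measurement locally...
site_a = site_summary([70.1, 68.4, 72.9])
site_b = site_summary([65.0, 71.2])
# ...and only the summaries are shared with the coordinator.
mean, var = pooled_mean_variance([site_a, site_b])
```

Production systems layer on secure enclaves, access controls and often differential privacy, but the principle is the same: move the computation to the data, not the data to the computation.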
AI as a research collaborator
Bose envisions a future in which AI plays a more active role in the scientific process. Rather than functioning solely as an analytical instrument, AI systems could become research collaborators.
“I hope to see AI shifting from being a research tool to a research collaborator, such as systems that can reason, design experiments and even propose new hypotheses alongside scientists.”
This direction is already being reflected in current initiatives. The AD Data Initiative is sponsoring a $1 million prize to develop an AI agent for Alzheimer’s research, while groups such as the C-BrAIn Consortium are developing AI research assistants focused on neurodegenerative diseases.
Bose expects these efforts to translate into practical outputs for drug discovery. “On the discovery front, I anticipate we’ll see drugs and diagnostics starting to be developed based on the first AI-identified therapeutic targets and biomarkers in the next few years.”
For a field long characterised by complexity and slow progress, the integration of AI, large-scale data and collaborative infrastructure is beginning to create clearer routes from discovery to development.
Meet the expert

Dr Niranjan Bose
Niranjan Bose is currently the Interim Executive Director of the Alzheimer’s Disease Data Initiative and Managing Director (Health and Life Sciences Strategy) at Gates Ventures LLC, where he serves as the Science Advisor to Mr Bill Gates. Prior to joining Gates Ventures in August 2014, he was Chief of Staff to the President of the Global Health Program at the Bill and Melinda Gates Foundation.
He was with the Gates Foundation from 2007 to 2014, including several years with its Enteric and Diarrhoeal Diseases programme strategy team, where he was responsible for managing a portfolio of investments including clinical development of enteric vaccines for rotavirus, cholera, enterotoxigenic Escherichia coli and Shigella.
Niranjan holds a PhD in biochemistry from Dartmouth College and an MS in biological sciences and a BS in pharmaceutical sciences from Birla Institute of Technology and Science, Pilani, India. He also received the Business Bridge Diploma from the Tuck School of Business at Dartmouth.
Related topics
Artificial Intelligence, Big Data, Bioinformatics, Biomarkers, Central Nervous System (CNS), Clinical Trials, Computational techniques, Drug Development, Drug Discovery, Drug Discovery Processes, Drug Targets, Machine learning, Research & Development, Technology, Translational Science