Artificial intelligence (AI) could help older adults navigate medical care and ease caregiver burdens, but only if tools are patient-focused and thoroughly tested, researchers said at the Gerontological Society of America (GSA) 2025 Annual Scientific Meeting.
In one session, researchers presented outcomes from a study of whether AI could reliably identify when a patient portal message was written by a caregiver rather than by a patient with dementia. The team tested 1973 patient portal messages. The algorithm correctly identified the sender most of the time, with an area under the curve of 0.92, a level of accuracy the researchers described as “highly precise.”
The model had the most trouble when a message referenced more than one person, for example, when a message included “my husband and I.” Clinicians must know who is speaking because patients’ needs and caregivers’ needs often trigger different follow-up actions.
If an AI system misidentifies the sender, the message could be misinterpreted or routed incorrectly, said Kelly T. Gleason, PhD, an associate professor at Johns Hopkins School of Nursing in Baltimore, who led the study.
Clinicians face growing pressure to respond to patient messages, so automation could reduce time spent in the electronic health record after hours, Gleason said.
“AI use in healthcare is highly prevalent, but most tools are launched before being tested by clinicians, patients, or care partners,” Gleason said. “Use of automation could help with shared access registration and caregiver identification — but there is still so much we do not know about AI.”
Gleason’s team also interviewed patients with dementia and their caregivers. Only 5% of the 650 people invited responded, a sign of the challenges in engaging older adults and caregivers in research through digital channels alone. Respondents said they were open to AI-generated draft messages if clinicians reviewed them and healthcare systems were transparent. But many doubted whether AI could detect emotional nuance or urgent needs, Gleason said.
“It is important to make sure automation is done in a way that does not compromise patient trust in healthcare,” Gleason said.
Gleason said her team is planning future trials to learn whether the use of AI tools improves or reduces the quality of care.
During the same session, Nancy L. Schoenborn, MD, an associate professor of medicine at Johns Hopkins University, shared results from her qualitative research on AI in healthcare.
Her team interviewed 49 people involved in healthcare, including patients, clinicians, and executives from insurance companies, investment firms, and tech companies. They asked how these stakeholders decide whether to use, invest in, or develop AI tools.
Older adults prioritized affordability and simple, accessible design. Clinicians said tools must integrate smoothly into electronic health record workflows, whereas investors and developers valued market size, revenue models, and ways to overcome regulatory hurdles that drive up costs.
The competing pressures create “real tension between the priorities of end users and those of developers and investors,” Schoenborn said. “Artificial intelligence offers tremendous potential for improving the health of older adults, but broad adoption remains limited. Cost and usability matter to everyone, but the same words mean very different things to different stakeholders.”
She also noted that many AI tools are built based on what the technology can do rather than what older adults actually need.
“AI health applications should be designed after understanding the target users and the health problem,” said Schoenborn. “Often we see solutions in search of a problem.”
Schoenborn said AI tools currently go through the same long and costly FDA approval processes designed for traditional medical devices, and argued that a different approval pathway is needed.
Schoenborn and Gleason reported having no relevant disclosures.
Lara Salahi is a healthcare journalist based in Boston.