In an era with less political and cultural division — and less tech-driven diversion — the stunning advances in medicine seen in recent years would get far more attention.

The rapid development of COVID-19 vaccines in 2020 is one of many examples of how artificial intelligence has been a massive game-changer. AI-powered diagnostic algorithms analyze X-rays, CT scans, MRIs and all sorts of other medical data with a degree of nuance and accuracy beyond human capability. Breakthroughs appear close at hand that could improve the treatment of, or help prevent, diseases such as diabetes, several types of cancer, heart disease, glaucoma and macular degeneration. Astonishingly, a single gene therapy treatment can now cure hemophilia B. “Living with this disease for 57 years, and then my life changes in 30 minutes,” said Curt Krouse, the first patient to receive the treatment at Penn Medicine’s Blood Disorders Center. “It’s hard to believe.”

But these developments also have the potential to create extraordinary moral and ethical dilemmas. Consider the rapid advances in the ability to detect Alzheimer’s risk. San Diego’s own Eric Topol, the cardiologist who founded and directs the Scripps Research Translational Institute, wrote in April about the excitement in health and scientific circles over “the breakthrough blood test for Alzheimer’s disease” — one that the Mayo Clinic concluded was “over 90% accurate” in identifying the preconditions of future cognitive decline. The test — which was approved by the FDA in May — is likely to keep getting more accurate.

So what happens when there is much higher public awareness of this health tool? Consider what ensued after the identification of the Huntington’s disease gene in 1993 led to a highly accurate genetic test for those at risk of eventually developing the rare condition. Extensive research has documented the fallout for the personal lives and relationships of those who learned Huntington’s was in their future.

Now imagine similar predictive power in diagnosing Alzheimer’s, which affects vastly more Americans. Especially among older individuals, decisions about getting married — or staying married — could hinge on test results. A partner’s future cognitive risk is at least as relevant as their past medical history. We may pass laws guaranteeing the privacy of medical records, but we can’t prevent people from asking questions about the test results of potential or existing life partners.

And what happens when lawyers get involved? If someone has taken a predictive Alzheimer’s test and received a high-risk result, does that individual have a legal obligation to disclose it before getting married — or entering a business partnership? At what point would nondisclosure be seen as the kind of omission that could void contracts or create grounds for legal disputes? The issue isn’t hypothetical. With Huntington’s, such disputes emerged over estate planning and the withholding of information from family members.

There’s a chance AI will spare us this dilemma by helping to find a way to prevent Alzheimer’s onset, which Topol believes is a real possibility. But until that happens, we’ll have to deal with a world in which a simple test delivers life-altering information — and forces conversations no one ever expected to have.