Only one cancer center, in the Midwest, has developed a radiation treatment for your type of tumor, and it has 14 years of data showing outstanding outcomes.

You ask your doctor about the radiation treatment, but when the hospital AI system processes your case, it recommends the practice of the majority of hospitals — brain surgery. Your doctor points out that even if they refer you to that cancer center in the Midwest, your insurance company’s AI will probably refuse to pay, because it, too, will likely have been aligned to the recommended standard of care: surgery.

Of course, standards of care guide most medical practice today, but they are interpreted in varied ways by thousands of human clinicians. The danger is that as we move from more than half of all medical practitioners using AI to institutions mandating its use, those recommended practices could harden into a monolithic, AI-enforced standard of care, with little chance for either patients or doctors to appeal.

In a $5 trillion health care system, financial pressure to use AI to influence clinical decisions — for reasons beyond patient benefit — will only intensify. Errors of commission (like undergoing unnecessary testing) and omission (skipping inexpensive prevention in favor of costly treatments later) will likely escalate.

If, instead, we ensure that AI systems are aligned to serve patients first, medical decisions are likely to become safer, more up-to-date with the latest science, and better communicated to patients.

That’s a challenge for the health care system as a whole, but let’s start with you, the patient: How do you ensure that the AI advice you research at home actually serves your health — and not someone else’s bottom line?

First, become a savvy AI patient. Exploit what makes AI different from human doctors: infinite patience and multiple perspectives. Ask the same question from different angles. “What would you recommend if you were a surgeon?” Then ask the chatbot to answer as a physical therapist would. Add constraints: “What if this treatment means I can’t work?”

Get second opinions from different chatbots. Our research at Harvard Medical School’s department of biomedical informatics shows that Claude, ChatGPT, and Gemini have remarkably different clinical approaches: They consistently disagree on the same cases.

Yes, you might need subscriptions to multiple chatbots, but that’s cheaper than most co-pays. And yes, bring any advice you might want to act on to your actual doctor. They may mentally groan, but that ship sailed two decades ago, when patients started printing out Google searches.

The more you practice using AI, the better you will be at spotting AI overconfidence — those moments when the chatbot sounds certain but shouldn’t be.

Second, own your data. Get your hands on your medical records. The 21st Century Cures Act guarantees you access to digital versions of your health data. Some hospitals offer this through patient portals. If your hospital connects to Apple Health (more than 800 US hospitals do), you can download files that chatbots can read directly. Even photos of old records help.

Making sense of this data requires extensive prompting skills. But here’s the bet worth making: If you collect these files in a private folder today, advancing AI will soon organize them efficiently and accurately. That time investment in gathering your data now will pay off as the technology improves.

Another caveat: Only a minority of chatbot companies guarantee they won’t retain or learn from your data. Which brings us to the hardest part: policy.

Congress should consider enacting rules for this growing industry. But it should proceed cautiously. Premature legislation could entrench current market leaders and kill promising alternatives, including open-source, patient-aligned chatbots. We’ve seen this before: The implementation of the federal health privacy law, HIPAA, was streamlined by commercial entities that invested in efficient workflows at scale; patients and researchers requiring only occasional access never got such infrastructure and remained stuck navigating institutional bureaucracy.

The best legislation wouldn’t pick winners or mandate specific medical approaches. Instead, it should require truthful labeling — not of ingredients but of influences. What data was used to train this chatbot? What procedures shaped its clinical reasoning? Which health care stakeholders influenced its development? What happens to the data you submit, and do you have any control over it?

With transparency, different chatbots would reflect different values and clinical philosophies, serving diverse patient populations. AI platforms already provide “model cards” laying out their origins and specs. Think of this as an evolution of those cards to describe crucial details relevant to medicine. For example, did health care stakeholders influence the alignment of the model? Does the model prioritize outcomes or costs? Monitoring and enforcement would be crucial.

We’re in a strange moment. At-home AI that could genuinely help patients navigate complex medical decisions is emerging, just as powerful financial interests are taking control of those same tools.

The question isn’t whether AI will transform health care; it has already begun to. The question is whether that transformation serves patients or profits.

Your move. Treat your health data as if it matters, interrogate your AI advisers the way skeptical journalists would question their sources, and demand transparency from the companies building these tools. The alternative is letting a $5 trillion industry decide what’s best for you — one chatbot response at a time.