On Facebook, Instagram, TikTok, and other social media platforms, highly respected doctors are endorsing a wide variety of medical products — and racking up millions of views in the process.

There’s just one problem. The videos aren’t authentic. They’re the work of scammers, who use artificial intelligence to clone the faces and voices of real, credentialed physicians without their knowledge or consent. The scammers then deploy these deepfake doctors to endorse snake-oil supplements and counterfeit weight loss pills.

Unless regulators crack down on these deepfakes, the videos threaten to steer even more patients toward wasteful, potentially dangerous purchases — while further undermining the public’s already faltering trust in medical institutions.

A recent New York Times investigation documented the scale of the problem. The Times interviewed several respected physicians who had discovered AI-generated videos of themselves promoting fake “GLP-1 alternatives,” miracle weight loss pearls, and other products they’d never actually recommend. In other cases highlighted by TODAY, AI impersonators hawked treatments that were scientifically impossible.

The problem extends well beyond the weight loss and supplement industry. In an article published by The BMJ, one physician described discovering that their name and likeness were being used without consent to promote supposed “cures” for conditions like high blood pressure and diabetes.

The imitators are increasingly convincing, especially as AI image and video generators get better by the day. One doctor’s own mother fell for a fake video featuring her daughter. Many scammers deliberately target older patients, who already have a harder time distinguishing authentic content from AI-generated fabrications.

And there’s no clear way for doctors or victims to stop this fraud. The websites behind the fake products — typically based overseas — frequently disappear and reappear under new names, rendering individual reports and lawsuits ineffective. Doctors can’t even counter the false advertisements with legitimate videos of their own, since doing so just gives scammers more material with which to clone their likenesses.

For patients, the dangers of these scams are obvious. Purchasing fake treatments from malicious actors is, at best, a waste of time and money. At worst, it can cause patients to delay proper diagnosis and treatment — and lead them toward potentially hazardous substances.

These scams arrive at a moment when public trust in medical science is already fragile. In the years since the Covid-19 pandemic, public confidence in U.S. health and science institutions has declined, according to recent surveys. That erosion of trust has real-world consequences. Vaccination rates for influenza, measles, and other routine childhood immunizations have fallen nationwide, raising concerns among clinicians about the return of preventable diseases.

Against this backdrop, deepfake doctor scams are especially dangerous. They don’t just harm individual patients; they also deepen a broader credibility crisis at a moment when public health depends on trust more than ever. By undermining confidence in medical professionals and institutions, these scams weaken our ability to keep people healthy and make it more likely that preventable diseases spread unchecked.

Nobody — least of all physicians who’ve dedicated their lives to helping patients — wants that to happen. We are doing all we can to sound the alarm and give patients clear guidance: Be skeptical of videos that promise miracle cures, especially for products that aren’t sold through a pharmacy or prescribed by your doctor. When in doubt, contact your doctor’s office or ask questions during an in-person visit.

But ultimately, this is not a problem physicians or patients can solve alone. Federal and state regulators must work alongside doctors to enact safeguards that protect patients and preserve the integrity of medical information.

We must begin by recognizing physician identity as a protected professional asset, and by prohibiting its use in any AI-generated, synthesized, or manipulated content without explicit consent. We also need to mandate clear labeling of AI-generated content, and pour more resources into raising patients’ awareness of scams, deterring medical impersonation, and holding bad actors accountable.

Even with stronger regulation, the platforms themselves should also step up. Sites like Facebook, Instagram, TikTok, and YouTube should commit to stronger enforcement against impersonation, faster takedowns when fraud is reported, and clearer labeling of AI-generated content so users understand when what they’re seeing isn’t real.

Patients deserve to trust that medical advice comes from a real, qualified professional acting in their best interest. Federal action is necessary to protect both patients and doctors — and prevent new technologies from eroding the time-tested foundations of medical care.

John Whyte, M.D., M.P.H., is CEO of the American Medical Association.