Opinion
As patients turn to AI for advice, GPs can help them use it well, while advocating for the guardrails they need, says Dr Janice Tan.
Last week, OpenAI launched ChatGPT Health – a platform where patients can upload their medical records and get AI-generated health information.
This week, Anthropic has introduced Claude for Healthcare, with integrations allowing individuals to connect their health records, lab results, and fitness data.
Around 230 million people already ask health questions on ChatGPT every week.
This isn’t some distant future scenario. This is happening now, in our consulting rooms.
The reaction among my GP colleagues has been mixed. Some see genuine possibility. Others worry about misinformation, liability, and what happens to the doctor-patient relationship when patients arrive already informed.
I get it. We’re all exhausted, the system is barely holding together, and now there’s one more thing changing.
But I think we’re asking the wrong questions.
Where we actually are
The RACGP’s Health of the Nation 2025 report paints a stark picture of Australian general practice.
It reveals that 68% of GPs identify the increasing complexity of patient presentations as the greatest challenge facing the profession.
It also shows 71% of GPs nominated mental health as a top reason for patient presentations, up 10% since the survey began in 2017.
And under the new My Health Record ‘sharing by default’ legislation, pathology and diagnostic imaging reports are now automatically uploaded to patients’ records, so patients can access their results before we’ve even had a chance to review them.
When patients are stuck waiting days or weeks for an appointment, what are they supposed to do about their worsening symptoms?
What about that abnormal-looking pathology report sitting in their My Health Record? Do they just wait anxiously?
One of my favourite consults was with an AI-informed patient
A patient came in last month having already accessed her knee MRI report through My Health Record and discussed it with ChatGPT.
She walked in with specific questions about her meniscal tear, the relationship between her symptoms and the imaging findings, and whether she needed surgery or could try conservative management first.
We had an actual conversation – two people working through a problem together. We used our limited time for what actually mattered.
There was clinical reasoning, there was shared decision making, and we addressed her specific concerns about returning to the sport she loved.
That consultation was one of the best I’d had all week.
The mistakes we all make
It’s important to recognise that both AI and clinicians get things wrong.
AI can lack context, oversimplify, and miss the subtle cues that come from years of pattern recognition.
But we miss diagnoses too – we’re influenced by cognitive biases, by how tired we are, by the 20 patients we’ve already seen that morning.
When patients started arriving with Google searches years ago, we adapted, and AI is simply the next iteration.
AI is more sophisticated, yes, but the fundamental dynamic is the same: patients trying to understand their own bodies using whatever tools are available.
Perhaps the real innovation here isn’t the technology. The innovation is the shift in what we think the clinician-patient relationship should be.
What the resistance is really about
When colleagues express concern about patients using AI, I hear exhaustion more than anything else.
Exhaustion at how relentlessly everything is changing. The funding model. The workforce crisis. The regulatory environment. The expectations of what we can deliver with the resources we have.
We’re all tired.
So when something else changes, such as patients starting to use AI to understand their health before they see us, it feels like one more thing to manage rather than a potential help.
But here’s what I keep coming back to: anything that helps patients understand their conditions, ask better questions, and engage more meaningfully in their care makes my job better.
The consultations where patients arrive informed are often the most satisfying.
The real pitfalls we need to watch for
Let’s be clear about where AI can genuinely cause harm.
An algorithm can’t feel the subtle fullness of an abdomen that makes you think twice about that ‘simple’ gastro presentation.
It can’t pick up on the hesitation in someone’s voice when they say they’re fine – the hesitation that tells you they’re absolutely not.
It can’t recognise when chest pain sounds cardiac despite a normal ECG, or when someone’s declining function matters more than any single test result.
AI also can’t account for the social context that changes everything: the patient who can’t afford the medication you’re discussing, the family dynamics that make a treatment plan unworkable, the health literacy that’s lower than it appears, the cultural considerations that shape how someone experiences and reports symptoms.
There are real concerns about data privacy, about who owns the information patients upload, about how that data might be used.
There’s the risk of patients misinterpreting information, of AI reinforcing health anxiety, of people delaying necessary care because an algorithm told them to wait and see.
These pitfalls matter. They deserve serious attention and thoughtful safeguards.
But they’re also why we need to be involved in shaping how patients use these tools, rather than standing on the sidelines saying they shouldn’t.
To my fellow GPs
I’m choosing to see this as an opportunity for us. This is our moment to lead.
We can be at the table shaping how AI integrates into patient care, teaching patients how to use these tools thoughtfully – which questions to ask, how to interpret what they’re reading, when AI’s limitations mean they need human expertise.
We could use the health literacy patients gain from AI to skip the basics and get straight to the complex clinical reasoning that actually requires our training.
The technology is here. Patients are using it.
We can help them use it well, advocate for the guardrails they need, and finally build the partnership model of care we’ve been talking about for years.
Or we can spend our energy mourning a paternalistic model that should have ended decades ago.
The future is here whether we’re ready or not.
I’d rather help build it than watch it happen to us.
About the author
Dr Janice Tan is a Sydney GP who is passionate about innovation in primary care. She’s General Manager of Clinical Innovation at Bupa and contributes to the RACGP’s Expert Committee for Practice Management and Technology and the Specific Interest Group for Digital Health and Innovation. All views expressed here are her own.