<h1>Will AI replace doctors? • Inside Story</h1>
<p>Throughout my career as a general practitioner and therapist, the doctor–patient relationship has been a source of both anxiety and immense satisfaction. And when I forayed into fiction writing, my first novel explored this very thing, albeit from the point of view of a doctor in psychological distress.</p>
<p>Over the years I've pursued further psychological training with the aim of improving my communication with and counselling of patients, and I now share what I've learned with medical students in my role as a facilitator in a Melbourne medical school. I've always been cognisant of the incredible privilege of my work. The word "privilege" can be a double-edged sword: I use it here to mean a sense of gratitude to patients for the trust they've placed in me. I've never thought of my medical degree as conferring privilege in the more negative sense.</p>
<p>I tell you all this because I feel the need to be straight with readers about my personal views about being a doctor. It's in the context of a career focused on fostering better relationships with patients that I read Charlotte Blease's Dr Bot: Why Doctors Can Fail Us and How AI Could Save Lives. It's in this same context that I reflect on its content.</p>
<p>I'm not at all opposed to a book that explores the possibilities of AI in medicine. What I'm struggling with is the general tenor of this one.
In fact, the purported premise of Dr Bot — that AI has the potential to address all the deficiencies of the current healthcare system — seems to me to be simply a smokescreen. The real agenda here, as I read it, is to demonstrate the inadequacies not of the healthcare system in general but of the medical profession in particular. In chapters with titles such as "The Ailing Appointment," "Doctor Deference," "The Dark Art of Medicine" — why it's dark is left unexplained — and "Humanizing Healthcare Without Doctors," Blease builds to her startling conclusion: that patients might be happier and healthier if doctors were sacked and AI put in charge.</p>
<p>I'm mindful that Blease, a health informaticist, would probably view my criticisms as an entirely predictable attempt to protect medical practice for my own benefit and that of my cronies. "Institutions will try to preserve the problem to which they are the solution," Blease writes, citing the apparently well-known "Shirky Principle." She continues: "In the case of medicine, I argue that the situation is even worse. Faced with increasingly broken health systems, the profession is failing to constructively reimagine and work towards credible solutions."</p>
<p>Doctors, according to Blease, are fallible and arrogant. They hold elitist "luxury beliefs" about their capabilities. Worst of all, their clinical decisions are governed by their "Stone Age" brains, in which deep-seated and immutable prejudices lurk. The solution to the problem of doctors, according to Blease, is not that the profession take steps to tackle its shortcomings but that society dispense with them entirely or, at most, retain a select few as "the single figurehead of a healer," whatever that might mean.
Dr Bot almost reads like the case AI might make for itself as a replacement healthcare provider.</p>
<p>Blease says that doctors must shoulder the blame for health system blind spots and failures, at least in Britain and the United States, from where her observations come. I find it surprising that she doesn't extend her criticism to the big gaps in US health insurance coverage, or to the profit-driven and exceedingly powerful US health insurance corporations that dictate who gets covered and for what. Nor does she touch on Britain's beleaguered National Health Service, where waiting times for hospital appointments are now measured in years. These are national, systemic imbalances, not ones born of medical malpractice.</p>
<p>Blease writes at length about the biases that arise from gender, race and socioeconomic mismatches between doctor and patient. She argues that AI-delivered healthcare will help erase such biases, which, in her eyes, are wholly attributable to human — that is, doctor — error. Such an argument exposes a fundamental flaw in Blease's thesis: she clings to the idea that medicine is still the patriarchal profession of fifty years ago.</p>
<p>Doctors' awareness of the dynamics of gender, race, sexuality and class — as well as other potential biases around patients' lifestyle choices — has risen dramatically in recent years. Medical students in Australia, at least, are now taught self-reflective practice alongside ethics and communication skills. A morning spent with the students I teach — committed young men and women of very diverse backgrounds — would convince anyone that our future doctors are awake to the traps of privilege and prejudice.</p>
<p>I also disagree with Blease's assertion that doctors' Type 1 (fast, intuitive) thinking style — the rapid decision-making required in an emergency — blinds them to their biases.
Type 2 (slow, deliberative) thinking is the mainstay for the many doctors working in mental health, to say nothing of the reflective tasks, debrief sessions and case reviews that every clinician undertakes as part of their ongoing education.</p>
<p>Strangely, one might say perversely, Blease argues against empathy in doctors, citing research to support her view that empathy tends to get in the way of rational clinical decision-making. This argument smacks of post-hoc justification: so doctors need to be replaced by computers because, like their patients, they're all too human? Isn't human empathy a motivator to act in the interests of the patient? Blease is also selective: a quick internet (non-AI-mediated) search brings up many studies revealing the benefits of doctors' empathy for patients' satisfaction, adherence to treatment and clinical outcomes. But Blease's doctors are damned either way: she also asserts that all too often they display the wrong sort of empathy.</p>
<p>AI will inevitably play a role in medicine, as it will in many fields of work. It's already arrived in the form of scribe programs that produce consultation notes and chatbots that condense the time taken for literature reviews from days to minutes. Some of you will already have had to decide whether to consent to your GP using these tools during your consultation.</p>
<p>Australian doctors are embracing these innovations as an extremely effective way of reducing the burden of administration, and are hopeful that, in our increasingly stretched healthcare system, the assistance AI provides will afford them more quality time with their patients. Blease sees doctors as technically illiterate and change-resistant, yet very few doctors working in this country don't use medical technology on a daily basis. And our junior medical workforce — those doctors who staff our public hospitals twenty-four hours a day — have never known a world without the internet.
What exactly are these AI innovations that doctors refuse to engage with? Who are these medical Luddites?</p>
<p>Some time ago for Inside Story I <a href="https://insidestory.org.au/quo-vadis-doctor/" rel="nofollow noopener" target="_blank">reviewed</a> The Doctor Who Wasn't There, a history of electronic media in health and medicine by the US physician Jeremy Greene. Over the past century, Greene explains, a series of new technologies have promised to democratise access to healthcare. From the invention of the humble telephone to the introduction of telemedicine, initial enthusiasm or scepticism faded over time as the new became commonplace, and yet the promised democratisation of healthcare remained as out of reach as ever, at least in the United States.</p>
<p>AI might run the same course, despite Blease's claims to the contrary. She champions bots for their superior diagnostic skills, the clarity of the health information they provide, and their assistance in helping patients talk more assertively to their doctors. While making brief mention of the possibility that Big Tech could put any or all of its AI programs behind paywalls, she paints AI as the great hope for a more accessible and equitable US healthcare system.
But she makes no mention of, and offers no alternative for, the skills beyond diagnosis and patient communication that bots don't currently provide — the skills required for physical examinations, surgery, <a href="https://insidestory.org.au/why-do-we-still-have-so-many-radiologists" rel="nofollow noopener" target="_blank">radiology</a>, anaesthesia and other interventions.</p>
<p>It's only in the last few pages of Dr Bot that Blease turns her attention from the myriad defects of the medical profession to touch — albeit briefly — on some very concerning aspects of AI: inequitable access to AI-assisted medical care, surveillance, data breaches (a particularly vexed issue when it comes to sensitive medical information) and AI's staggering consumption of energy and water. Blease states in her introduction that these "pressing national and global concerns" are largely outside the scope of her book — a convenient stance — yet she's still content to conclude that AI might replace the medical profession and do a better job of healthcare.</p>
<p>Surely the horrifying fact that each ChatGPT search uses around 500ml of water is as important a consideration for our wellbeing as a species as the sometimes brusque bedside manner of one's surgeon. I'm not excusing the surgeon — except to say that her brusqueness might have something to do with being up all night repairing a ruptured aorta — but surely a broader perspective is critical before recommendations about the net value of AI in healthcare can confidently be made.</p>
<p>Who are the potential readers of this book?
Doctors aren't likely to want to wade through this castigating text, with its sometimes laboured metaphors — "the messy entrails of the patient–doctor appointment," for example — to discover the author's conclusion that, in the long term, the traditional medical profession might well be wholly replaced by physician assistants, nurse practitioners, medical knowledge engineers, telemedicine developers and medical data scientists. Wait a minute: so physician assistants and nurse practitioners can stay? Aren't they clinicians too, every bit as prone to bias and "Stone Age" thinking as doctors? (I should add that Blease includes "junior" physicians in this acceptable line-up, without mention of their fate when they "grow up.")</p>
<p>Is Blease writing with the interests of patients in mind? She staunchly claims she is, and I applaud her intention. Perhaps Dr Bot will encourage readers to consult bots to make sense of their symptoms, learn more about their prescribed medication and decipher medical jargon. Empowering patients is a good thing, but empowerment doesn't necessarily blossom from the seeds of physician mistrust.</p>
<p>In a recent article for the New Yorker titled "If AI Can Diagnose Patients, What Are Doctors For?" the physician Dhruv Khullar writes about a bot that can diagnose complex and rare diseases with accuracy similar to that of the most experienced physicians. This bot and others like it promise better outcomes for patients by giving easier — perhaps even more equitable — access to prompt diagnoses and appropriate care.</p>
<p>But there's danger in relying too heavily on AI, Khullar writes. For one thing, AI still gets it wrong. And doctors run the risk of "cognitive deskilling": losing their diagnostic knowledge and skills through lack of use.
As AI becomes more integrated into routine medical practice, this deskilling is something to guard against, especially when the lights go out and the bots shut down, as is increasingly likely to happen as our climate grows ever more extreme.</p>
<p>Khullar's article argues for a working relationship between AI and doctor for the benefit of patients. He suggests that doctors see AI as a means of exploring a clinical dilemma, a place to start rather than end. "At their best," he writes, "they would steer you through — not away from — the medical system."</p>
<p>I've never been an out-and-out apologist for the medical profession. I've worked with my fair share of bullying senior medical staff. On occasion I've seen my colleagues display hubris, insensitivity and lack of empathy. I know we've all made mistakes in diagnosis and management, mistakes that sometimes haunt us for the rest of our careers.</p>
<p>But I can't accept that the way doctors work today can be considered obsolete, especially as a whole generation of young doctors is bringing to the profession an improved awareness of the socioeconomic determinants of health, sound skills in communication and teamwork, and a keen interest in using AI to maximise patient care. Blease has made her case. I owe it to all the conscientious and caring doctors I know to dispute it.
•</p>
<p><a target="_blank" href="https://www.readings.com.au/product/9780300247145/dr-bot--charlotte-blease--2025--9780300247145#rac:dt4xiswhx65g" rel="nofollow noopener">Dr Bot: Why Doctors Can Fail Us and How AI Could Save Lives</a><br />
By Charlotte Blease | Yale University Press | $39.95 | 352 pages</p>