
A growing number of people are turning to artificial intelligence, and ChatGPT in particular, to interpret, or reinterpret, a medical diagnosis or test results.
And while some report a satisfactory experience, experts remain skeptical.
AI applications are trained to satisfy the user, not to inform them, Manolis Wallace, associate professor of cultural and educational informatics at the University of the Peloponnese and director of its Knowledge and Uncertainty Research Lab, told Kathimerini.
ChatGPT infers the answer we would prefer from the way we phrase our questions, Wallace says. One way to get the application to answer “truthfully” is to strip any subjective element from our questions, he adds: asking “These results aren’t anything serious, are they?” invites reassurance, whereas “What do these results indicate?” does not.
Second-guessing a diagnosis, especially an unpleasant one, is only human. The trouble is that people then turn to a machine trained to project an air of authority, and place even greater trust in its answers.