<h1>Should an AI copy of you help decide if you live or die?</h1>
<p><em>Published October 20, 2025</em></p>
<p>&#8220;It would combine demographic and clinical variables, documented advance-care-planning data, patient-recorded values and goals, and contextual information about specific decisions,&#8221; he said.</p>
<p>&#8220;Including textual and conversational data could further increase a model&#8217;s ability to learn why preferences arise and change, not just what a patient&#8217;s preference was at a single point in time,&#8221; Starke said.</p>
<p>Ahmad suggested that future research could focus on validating fairness frameworks in clinical trials, evaluating moral trade-offs through simulations, and exploring how cross-cultural bioethics can be combined with AI designs.</p>
<p>Only then might AI surrogates be ready to be deployed, and even then only as &#8220;decision aids,&#8221; Ahmad wrote. Any &#8220;contested outputs&#8221; should automatically &#8220;trigger [an] ethics review,&#8221; he added, concluding that &#8220;the fairest AI surrogate is one that invites conversation, admits doubt, and leaves room for care.&#8221;</p>
<h2>&#8220;AI will not absolve us&#8221;</h2>
<p>Ahmad is hoping to test his conceptual models at various UW sites over the next five years, which would offer &#8220;some way to quantify how good this technology is,&#8221; he said.</p>
<p>&#8220;After that, I think there&#8217;s a collective decision regarding how as a society we decide to integrate or not integrate something like this,&#8221; Ahmad said.</p>
<p>In his paper, he warned against chatbot AI surrogates that could be interpreted as a simulation of the patient, predicting that future models may even speak in patients&#8217; voices and suggesting that the &#8220;comfort and familiarity&#8221; of such tools might blur &#8220;the boundary between assistance and emotional manipulation.&#8221;</p>
<p>Starke agreed that more research and &#8220;richer conversations&#8221; between patients and doctors are needed.</p>
<p>&#8220;We should be cautious not to apply AI indiscriminately as a solution in search of a problem,&#8221; Starke said. &#8220;AI will not absolve us from making difficult ethical decisions, especially decisions concerning life and death.&#8221;</p>
<p>Truog, the bioethics expert, told Ars he &#8220;could imagine that AI could&#8221; one day &#8220;provide a surrogate decision maker with some interesting information, and it would be helpful.&#8221;</p>
<p>But a &#8220;problem with all of these pathways&#8230; is that they frame the decision of whether to perform CPR as a binary choice, regardless of context or the circumstances of the cardiac arrest,&#8221; Truog&#8217;s editorial said. &#8220;In the real world, the answer to the question of whether the patient would want to have CPR&#8221; when they&#8217;ve lost consciousness, &#8220;in almost all cases,&#8221; is &#8220;it depends.&#8221;</p>
<p>When Truog thinks about the kinds of situations he could end up in, he knows he wouldn&#8217;t just be considering his own values, health, and quality of life. His choice &#8220;might depend on what my children thought&#8221; or &#8220;what the financial consequences would be on the details of what my prognosis would be,&#8221; he told Ars.</p>
<p>&#8220;I would want my wife or another person that knew me well to be making those decisions,&#8221; Truog said. &#8220;I wouldn&#8217;t want somebody to say, &#8216;Well, here&#8217;s what AI told us about it.&#8217;&#8221;</p>