{"id":244250,"date":"2025-10-27T14:19:33","date_gmt":"2025-10-27T14:19:33","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/244250\/"},"modified":"2025-10-27T14:19:33","modified_gmt":"2025-10-27T14:19:33","slug":"the-hardest-part-of-creating-conscious-ai-might-be-convincing-ourselves-its-real","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/244250\/","title":{"rendered":"The hardest part of creating conscious AI might be convincing ourselves it\u2019s real"},"content":{"rendered":"<p>As far back as 1980, the American philosopher <a href=\"https:\/\/www.cambridge.org\/core\/journals\/behavioral-and-brain-sciences\/article\/abs\/minds-brains-and-programs\/DC644B47A4299C637C89772FACC2706A\" rel=\"nofollow noopener\" target=\"_blank\">John Searle distinguished between strong and weak AI<\/a>. Weak AIs are merely useful machines or programs that help us solve problems, whereas strong AIs would have genuine intelligence. A strong AI would be conscious.<\/p>\n<p>Searle was sceptical of the very possibility of strong AI, but not everyone shares his pessimism. Most optimistic are those who endorse <a href=\"https:\/\/plato.stanford.edu\/entries\/functionalism\/\" rel=\"nofollow noopener\" target=\"_blank\">functionalism<\/a>, a popular theory of mind that takes conscious mental states to be determined solely by their function. For a functionalist, the task of producing a strong AI is merely a technical challenge. 
If we can create a system that functions like us, we can be confident it is conscious like us.<\/p>\n<p>            <a href=\"https:\/\/images.theconversation.com\/files\/697784\/original\/file-20251022-56-x7jokg.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" alt=\"Illustration of a human talking to a robot\" class=\"lazyload\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/10\/file-20251022-56-x7jokg.jpg\"  \/><\/a><\/p>\n<p>              Anyone there?<br \/>\n              <a class=\"source\" href=\"https:\/\/www.shutterstock.com\/image-photo\/close-common-myna-bird-acridotheres-tristis-2629229511\" rel=\"nofollow noopener\" target=\"_blank\">Littlestar23<\/a><\/p>\n<p>Recently, we have reached the tipping point. Generative AIs such as Chat-GPT are now so advanced that their responses are often indistinguishable from those of a real human \u2013 see <a href=\"https:\/\/richarddawkins.substack.com\/p\/are-you-conscious-a-conversation\" rel=\"nofollow noopener\" target=\"_blank\">this exchange<\/a> between Chat-GPT and Richard Dawkins, for instance. <\/p>\n<p>This issue of whether a machine can fool us into thinking it is human is the subject of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Turing_test\" rel=\"nofollow noopener\" target=\"_blank\">a well-known test<\/a> devised by English computer scientist Alan Turing in 1950. Turing claimed that if a machine could pass the test, we ought to conclude it was genuinely intelligent. <\/p>\n<p>Back in 1950 this was pure speculation, but according to a <a href=\"https:\/\/arxiv.org\/abs\/2503.23674\" rel=\"nofollow noopener\" target=\"_blank\">pre-print study<\/a> from earlier this year \u2013 that\u2019s a study that hasn\u2019t been peer-reviewed yet \u2013 the <a href=\"https:\/\/plato.stanford.edu\/entries\/turing-test\/\" rel=\"nofollow noopener\" target=\"_blank\">Turing test<\/a> has now been passed. 
Chat-GPT convinced 73% of participants that it was human. <\/p>\n<p>What\u2019s interesting is that nobody is buying it. Experts are not only <a href=\"https:\/\/theconversation.com\/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823\" rel=\"nofollow noopener\" target=\"_blank\">denying that Chat-GPT is conscious<\/a> but seemingly not even <a href=\"https:\/\/theconversation.com\/we-need-to-stop-pretending-ai-is-intelligent-heres-how-254090\" rel=\"nofollow noopener\" target=\"_blank\">taking the idea seriously<\/a>. I have to admit, I\u2019m with them. It just doesn\u2019t seem plausible.<\/p>\n<p>The key question is: what would a machine actually have to do in order to convince us?<\/p>\n<p>Experts have tended to focus on the technical side of this question. That is, to discern what technical features a machine or program would need in order to satisfy our best theories of consciousness. A <a href=\"https:\/\/arxiv.org\/abs\/2308.08708\" rel=\"nofollow noopener\" target=\"_blank\">2023 article<\/a>, for instance, as reported <a href=\"https:\/\/theconversation.com\/why-chatgpt-isnt-conscious-but-future-ai-systems-might-be-212860\" rel=\"nofollow noopener\" target=\"_blank\">here in The Conversation<\/a>, compiled a list of 14 technical criteria or \u201cconsciousness indicators\u201d, such as learning from feedback (Chat-GPT didn\u2019t make the grade).<\/p>\n<p>But creating a strong AI is as much a psychological challenge as a technical one. It is one thing to produce a machine that satisfies the various technical criteria that we set out in our theories, but it is quite another to suppose that, when we are finally confronted with such a thing, we will believe it is conscious. <\/p>\n<p>The success of Chat-GPT has already demonstrated this problem. For many, the Turing test was the benchmark of machine intelligence. But if it has been passed, as the pre-print study suggests, the goalposts have shifted. 
They might well keep shifting as technology improves.<\/p>\n<h2>Myna difficulties<\/h2>\n<p>This is where we get into the murky realm of an age-old philosophical quandary: <a href=\"https:\/\/www.britannica.com\/topic\/problem-of-other-minds\" rel=\"nofollow noopener\" target=\"_blank\">the problem of other minds<\/a>. Ultimately, one can never know for sure whether anything other than oneself is conscious. In the case of human beings, the problem is little more than idle scepticism. None of us can seriously entertain the possibility that other humans are unthinking automata, but in the case of machines it seems to go the other way. It\u2019s hard to accept that they could be anything but.<\/p>\n<p>A particular problem with AIs like Chat-GPT is that they seem like mere mimicry machines. They\u2019re like the myna bird that learns to vocalise words with no idea of what it is doing or what the words mean. <\/p>\n<p>            <a href=\"https:\/\/images.theconversation.com\/files\/697782\/original\/file-20251022-66-wqqrpv.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" alt=\"Myna bird\" class=\"lazyload\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/10\/file-20251022-66-wqqrpv.jpg\"  \/><\/a><\/p>\n<p>              \u2018Who are you calling a stochastic parrot?\u2019<br \/>\n              <a class=\"source\" href=\"https:\/\/www.shutterstock.com\/image-photo\/close-common-myna-bird-acridotheres-tristis-2629229511\" rel=\"nofollow noopener\" target=\"_blank\">Mikhail Ginga<\/a><\/p>\n<p>This doesn\u2019t mean we will never make a conscious machine, of course, but it does suggest that we might find it difficult to accept it if we did. And that might be the ultimate irony: succeeding in our quest to create a conscious machine, yet refusing to believe we had done so. 
Who knows, it might have already happened.<\/p>\n<p>So what would a machine need to do to convince us? One tentative suggestion is that it might need to exhibit the kind of autonomy we observe in many living organisms.<\/p>\n<p>Current AIs like Chat-GPT are purely responsive. Keep your fingers off the keyboard and they\u2019re as quiet as the grave. Animals are not like this, at least not the ones we commonly take to be conscious, like chimps, dolphins, cats and dogs. They have their own impulses and inclinations (or at least appear to), along with the desires to pursue them. They initiate their own actions on their own terms, for their own reasons.<\/p>\n<p>Perhaps if we could create a machine that displayed this type of autonomy \u2013 the kind of autonomy that would take it beyond a mere mimicry machine \u2013 we really would accept it was conscious? <\/p>\n<p>It\u2019s hard to know for sure. Maybe we should ask Chat-GPT.<\/p>\n","protected":false},"excerpt":{"rendered":"As far back as 1980, the American philosopher John Searle distinguished between strong and weak AI. 
Weak AIs&hellip;\n","protected":false},"author":2,"featured_media":244251,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-244250","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/244250","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=244250"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/244250\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/244251"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=244250"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=244250"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=244250"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}