{"id":501539,"date":"2026-02-26T20:27:12","date_gmt":"2026-02-26T20:27:12","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/501539\/"},"modified":"2026-02-26T20:27:12","modified_gmt":"2026-02-26T20:27:12","slug":"why-i-have-changed-my-mind-about-ai-and-you-should-too","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/501539\/","title":{"rendered":"Why I have changed my mind about AI and you should too"},"content":{"rendered":"<p><img decoding=\"async\" class=\"Image\" alt=\"\" width=\"1350\" height=\"901\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/02\/SEI_286610406.jpg\"   loading=\"eager\" fetchpriority=\"high\" data-image-context=\"Article\" data-image-id=\"2516992\" data-caption=\"It\u2019s time to rethink our relationship with AI\" data-credit=\"Flavio Coelho\/Getty Images\"\/><\/p>\n<p class=\"ArticleImageCaption__Title\">It\u2019s time to rethink our relationship with AI<\/p>\n<p class=\"ArticleImageCaption__Credit\">Flavio Coelho\/Getty Images<\/p>\n<p>It is undeniable that the launch of ChatGPT was a historically significant event, but is that because it was the first glorious step towards a superintelligent future or because it was the start of a world filled with AI snake-oil salespeople? I\u2019ve long thought that large language models, the technology behind AI chatbots, are fascinating but flawed, putting me firmly in the snake-oil camp. But a week spent vibe coding has revealed something surprising: both the boosters and the sceptics are wrong.<\/p>\n<p>First, I should explain. <a href=\"https:\/\/www.newscientist.com\/article\/2473993-what-is-vibe-coding-should-you-be-doing-it-and-does-it-matter\/\" rel=\"nofollow noopener\" target=\"_blank\">Vibe coding<\/a>, if you aren\u2019t familiar, is a term coined about a year ago by Andrej Karpathy, an AI researcher who co-founded OpenAI and formerly worked there. 
It refers to the process of developing software by \u201cvibing\u201d with an AI model, instructing it in plain language while letting it generate the actual code. Recently, I\u2019ve seen people saying that the latest tools \u2013 Claude Code and ChatGPT Codex \u2013 have become surprisingly good at coding, such as in a piece in The\u00a0New York Times\u00a0titled \u201c<a href=\"https:\/\/www.nytimes.com\/2026\/02\/18\/opinion\/ai-software.html\" rel=\"nofollow noopener\" target=\"_blank\">The A.I. disruption we\u2019ve been waiting for has arrived<\/a>\u201d.<\/p>\n<p>I decided to experiment with these tools, and I have been astonished by the results. In just a few short days, with only limited experience of coding, I have created personally useful apps like an audiobook picker that checks what is available at my local library, and a combined camera and teleprompter app that runs on my phone.<\/p>\n<p>That might sound boring to you, and that is perfectly fine, for reasons I will explain later. What is important here is that this process has seen me engage more deeply with products like ChatGPT than I have before. Previously, I have tried minor experiments, been disgusted at generic writing, <a href=\"https:\/\/www.newscientist.com\/article\/2452746-ive-been-boosting-my-ego-with-a-sycophant-ai-and-it-cant-be-healthy\/\" rel=\"nofollow noopener\" target=\"_blank\">sycophancy<\/a> or inaccurate search results, and bounced off. 
For these new coding projects, my extended use made me realise something I hadn\u2019t before \u2013 the way LLMs have been productised produces a machine I am destined to hate.<\/p>\n<p>Very few of us have been exposed to a \u201craw\u201d LLM, by which I mean <a href=\"https:\/\/www.newscientist.com\/article\/2384030-how-does-chatgpt-work-and-do-ai-powered-chatbots-think-like-us\/\" rel=\"nofollow noopener\" target=\"_blank\">a statistical model that has been trained on a large collection of data to produce plausibly representative text<\/a>. Instead, the majority of us are using technology that has been mediated through a process called <a href=\"https:\/\/www.newscientist.com\/article\/2450360-ais-are-more-likely-to-mislead-people-if-trained-on-human-feedback\/\" rel=\"nofollow noopener\" target=\"_blank\">reinforcement learning from human feedback<\/a> (RLHF). AI companies use humans to rate the text produced by a raw LLM, rewarding answers that are perceived to be confident, useful and engaging while penalising harmful content or answers that are likely to discourage a majority of users from engaging with their products.<\/p>\n<p>It is this RLHF process that produces the generic \u201cchatbot voice\u201d that you are probably familiar with. It is a process that bakes in the implicit values of the producer, from a general \u201cmove fast and break things\u201d Silicon Valley attitude to the more specific Elon Musk-infused ideology of Grok, the controversial X chatbot.<\/p>\n<p>Currently, it is very difficult to get a chatbot to express uncertainty, contradict the user or arrest forward momentum. This became most obvious to me when I encountered an unsolvable problem with my teleprompter. I had been trying to create an app that would overlay text on my existing camera app, assuming that would be easier than creating a camera from scratch, but the code ChatGPT was producing kept failing. It repeatedly suggested fixes, urging me forwards with the project. 
It was only when I stepped back that I realised the intricacies of the Android operating system, which I won\u2019t bore you with, meant making an all-in-one app would be much easier. As soon as I asked ChatGPT to produce this, it worked instantly.<\/p>\n<p>Learning from this, I began instructing ChatGPT to constantly question both itself and me. I demanded vigilant scepticism. \u201cJacob wants the assistant to default to evidence-first analysis: avoid extrapolation, explicitly flag inference vs evidence, and prefer stating uncertainty or stopping when evidence is thin, unless the user asks for speculation,\u201d is just one of the frameworks (generated by itself) that I have written into its memory. In other words, I built a model uniquely designed to work with my psychological profile, carefully unpicking OpenAI\u2019s values and replacing them with my own.<\/p>\n<p>It\u2019s not perfect. It is very hard for an LLM to fight its RLHF training, and the default keeps seeping through. But what this means is that I now have a tool that serves as a somewhat-useful cognitive mirror. I didn\u2019t use it to write this article, both because its writing style is still terribly turgid and because New Scientist, quite rightly, has strict rules against AI-generated copy, but I used it to think about this article. I asked my cognitive mirror to probe arguments and counterarguments, rejecting many of its conclusions as false or spurious. I extracted value, but it required caution and work, not letting the AI do the heavy lifting. Crucially, my brain remained fully engaged at all times.<\/p>\n<p>This leads me to reinforce a conclusion I had already reached: engaging with someone else\u2019s AI output is, in almost all cases, functionally useless. You can\u2019t gain anything from AI-generated text that wouldn\u2019t be better obtained by prompting an AI yourself. 
I also continue to reject the idea that AI is actually intelligent in any way \u2013 instead, I consider LLMs to be a cognitive aid, like a calculator or word processor. With this framing, as a private tool, not a world-conquering machine, I now see the benefit. For that reason, it is right that you shouldn\u2019t care about my teleprompter app. What should excite you is the possibility of solving your own unique problems in your own unique way.<\/p>\n<p>Here\u2019s where our current AI paradigm introduces another issue. In my view, the best LLM would be one that runs on your own computer, with no connection to a private corporation. It should be treated as a dangerous, experimental tool that you have full control over. I\u2019m reminded of the meme that software engineers <a href=\"https:\/\/www.tumblr.com\/michaelblume\/169525456166\/tech-enthusiasts-everything-in-my-house-is-wired\" rel=\"nofollow noopener\" target=\"_blank\">keep a loaded gun next to their printer, in case it makes a noise they don\u2019t recognise<\/a>. Sadly, running your own cutting-edge LLM isn\u2019t currently possible for a variety of reasons, not least that the <a href=\"https:\/\/www.newscientist.com\/article\/2507081-why-is-ai-making-computers-and-games-consoles-more-expensive\/\" rel=\"nofollow noopener\" target=\"_blank\">AI boom is driving up prices of the very hardware you need<\/a>.<\/p>\n<p>I must also address the original sin of LLMs: <a href=\"https:\/\/www.newscientist.com\/article\/mg25834383-300-with-ai-exploiting-businesses-data-when-do-we-get-paid-and-by-whom\/\" rel=\"nofollow noopener\" target=\"_blank\">potential copyright infringement<\/a>. By design, this technology can only be built on data ingested at a large scale, essentially the entire textual record of humanity. 
It is <a href=\"https:\/\/www.newscientist.com\/article\/2372140-chatgpt-seems-to-be-trained-on-copyrighted-books-like-harry-potter\/\" rel=\"nofollow noopener\" target=\"_blank\">undeniable that firms like OpenAI built their models by using copyrighted text without permission<\/a>, though whether this was actually illegal is the subject of <a href=\"https:\/\/www.newscientist.com\/article\/2502650-ai-firms-began-to-feel-the-legal-wrath-of-copyright-holders-in-2025\/\" rel=\"nofollow noopener\" target=\"_blank\">ongoing court cases<\/a>. A private LLM would have the same issues, but I can see solutions, such as public sector models, effectively pardoned by governments and distributed freely for the benefit of all, not private corporations. I also remain concerned about the <a href=\"https:\/\/www.newscientist.com\/article\/2503556-ai-power-use-forecast-finds-the-industry-far-off-track-to-net-zero\/\" rel=\"nofollow noopener\" target=\"_blank\">environmental impact of data centres<\/a>, but again this could be partly mitigated by a wider distribution of LLMs running on our own machines.<\/p>\n<p>I accept that some people reading this will accuse me of having sold out to the tech bros. All I can say to that is that I haven\u2019t revised my long-held position on LLMs as a technology that is fascinating, dangerous and occasionally extraordinary.<\/p>\n<p>What I have realised is that the main way we are engaging with the technology, via slick chatbots like ChatGPT, is where so much of the harm arises and is allowed to pass out into the world. LLMs shouldn\u2019t be settled and productised, forced into every part of our lives with a sparkling emoji that wants to be your friend. It would be much better if we used these tools mindfully, with increased friction and full awareness of, and caution against, the potential harm they can cause. Here, a useful metaphor rears its fanged head. I don\u2019t want OpenAI\u2019s snake oil. 
I want snakes.<\/p>\n<p class=\"ArticleTopics__Heading\">Topics:<\/p>\n","protected":false},"excerpt":{"rendered":"It\u2019s time to rethink our relationship with AI Flavio Coelho\/Getty Images It is undeniable that the launch of&hellip;\n","protected":false},"author":2,"featured_media":501540,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,2140,61],"class_list":{"0":"post-501539","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-chatgpt","14":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/501539","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=501539"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/501539\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/501540"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=501539"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=501539"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=501539"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}