{"id":320015,"date":"2025-12-17T00:58:16","date_gmt":"2025-12-17T00:58:16","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/320015\/"},"modified":"2025-12-17T00:58:16","modified_gmt":"2025-12-17T00:58:16","slug":"heres-why-i-ditched-chatgpt-and-moved-to-local-ai","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/320015\/","title":{"rendered":"Here&#8217;s why I ditched ChatGPT and moved to local AI"},"content":{"rendered":"<p><img class=\"e_jg\" decoding=\"async\" loading=\"eager\"  title=\"Anything LLM on a Mac\"  alt=\"Anything LLM on a Mac\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/12\/Anything-LLM-on-a-Mac-scaled.jpg\"\/><\/p>\n<p>Dhruv Bhutani \/ Android Authority<\/p>\n<p>I was one of the first people to jump on the <a href=\"https:\/\/www.androidauthority.com\/ai-tools-not-chatgpt-3612782\/\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT<\/a> bandwagon. The convenience of having an all-knowing research assistant available at the tap of a button has its appeal, and for a long time, I didn\u2019t care much about the ramifications of using AI. Fast forward to today, and it\u2019s a whole different world. There\u2019s no getting around the fact that you are feeding an immense amount of deeply personal information, from journal entries to sensitive work emails, into a black box owned by trillion-dollar corporations. What could go wrong?<\/p>\n<p>Now, there\u2019s no going back from AI, but there are use cases for it where it can work as a productivity multiplier. And that\u2019s why I\u2019ve been going down the rabbit hole of researching <a href=\"https:\/\/www.androidauthority.com\/how-to-download-and-run-deepseek-3520820\/\" rel=\"nofollow noopener\" target=\"_blank\">local AI<\/a>. If you\u2019re not familiar with the concept, it\u2019s actually fairly simple. It is entirely possible to run a large language model, the brains behind a tool like ChatGPT, right off your computer or even a phone. 
Of course, it won\u2019t be as capable or all-knowing as ChatGPT, but depending on your use case, it might still be effective enough. Better still, no data leaves your device, nor is there a monthly subscription fee to consider. But if you\u2019re concerned that pulling this off requires an engineering degree, think again. In 2025, running a local LLM is shockingly easy, with tools like LM Studio and Ollama making it as simple as installing an app. After spending the last few months running my own local AI, I can safely say I\u2019m never going back to being purely cloud-dependent. Here\u2019s why.<\/p>\n<p>Privacy<\/p>\n<p><img class=\"e_jg\" decoding=\"async\" loading=\"lazy\"  title=\"AnythingLM homepage\"  alt=\"AnythingLM homepage\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/12\/AnythingLM-homepage.jpg\"\/><\/p>\n<p>Dhruv Bhutani \/ Android Authority<\/p>\n<p>We\u2019ve all pasted something into ChatGPT that we probably shouldn\u2019t have. Perhaps it was a code snippet at work that you were trying to make sense of. Or perhaps a copy of a contract, maybe some embargoed information, or even just a really personal journal entry that you don\u2019t feel comfortable exposing to our corporate overlords. Every time you hit send on a cloud-based AI, that data is processed on an entirely opaque server that will inevitably use your data for the greater good of AI-kind.<\/p>\n<p>Here\u2019s the deal. Alongside my journalistic endeavors, I run a business where I\u2019m regularly exposed to NDA-protected information. 
Beyond the obvious privacy risk, it would be illegal for me to share this information with a public AI tool. However, running a local LLM flips the script entirely. I\u2019ve tried many tools, but these days, I\u2019m testing out AnythingLLM. It\u2019s a fantastically simple desktop tool that lets you chat with your documents entirely on your own computer. This lets me feed it tax statements, invoices, bank statements, and even NDA-protected documents, and ask it to summarise expenses or flag clauses I should keep an eye on. Because the entire LLM is running on my computer, I know this data isn\u2019t being beamed to offshore servers for processing, and I get fast, AI-driven analysis of data that strictly cannot be shared. Effectively, this is like running the enterprise version of ChatGPT or <a href=\"https:\/\/www.androidauthority.com\/chatgpt-vs-bing-chat-3292126\/\" rel=\"nofollow noopener\" target=\"_blank\">Copilot<\/a>, but on your own computer with little to no cost attached.<\/p>\n<p>Grammar checking<\/p>\n<p><img class=\"e_jg\" decoding=\"async\" loading=\"lazy\"  title=\"AnythingLM grammar check\"  alt=\"AnythingLM grammar check\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/12\/AnythingLM-grammar-check.jpg\"\/><\/p>\n<p>Dhruv Bhutani \/ Android Authority<\/p>\n<p>I write for a living, and as my managing editors would attest, a grammar check is a necessary evil for the job. But tools like <a href=\"https:\/\/www.androidauthority.com\/apple-intelligence-vs-grammarly-feature-3466509\/\" rel=\"nofollow noopener\" target=\"_blank\">Grammarly<\/a> are, by definition, keyloggers. To get anything done, they have to read and process whatever you write. That might be fine for a quick email, but I\u2019m not comfortable using them when working with privileged information. Moreover, to get proper utility out of these tools, you need to sign up for a subscription. 
The cons definitely outweigh the pros.<\/p>\n<p>So, my solution has been to use one of the many offline LLMs as a high-powered spellchecker. For example, you can install LM Studio, load a model like Llama 3 or Mistral, and give it a highly detailed prompt telling it to fix grammar and spelling without changing the tone or phrasing. That\u2019s all it takes. I can paste entire articles into the local LLM and get instant feedback and corrections on my writing. You can even prime it with specific style rules, like, say, a preference for or against Oxford commas, and the local LLM will follow them. Effectively, you can trust a local LLM to work as a second set of eyes without any fear of information leaking out to the broader web.<\/p>\n<p>Coding<\/p>\n<p><img class=\"e_jg\" decoding=\"async\" loading=\"lazy\"  title=\"Anything LM code checking\"  alt=\"Anything LM code checking\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/12\/Anything-LM-code-checking.jpg\"\/><\/p>\n<p>Dhruv Bhutani \/ Android Authority<\/p>\n<p>Much has been said about the spaghetti code generated by LLMs. However, it is hard to deny that AI can be useful for generating boilerplate code to give you a head start. Look, I\u2019m a fairly rusty engineer fifteen years out of university who still likes to dabble in code once in a while over the weekend. For someone like me, AI has been incredible at making sense of code snippets from open-source apps. Turns out, I don\u2019t really need Copilot or Claude for this. For basic stuff, I can quite literally just dump a code fragment into Ollama and ask the AI to explain the logic to me. When it comes to writing code, I\u2019ve dabbled in using extensions for VS Code that plug Ollama right into the code editor.<\/p>\n<p>Just download a coding-specific model, like, say, <a href=\"https:\/\/deepseekcoder.github.io\/\" target=\"_blank\" rel=\"nofollow noopener\">DeepSeek Coder<\/a>, locally. 
The LLM offers code autocompletion, refactoring suggestions, and explanations for bugs when things inevitably fail to compile. And it does all of this within the IDE itself, without phoning home. It\u2019s the GitHub Copilot experience, but running entirely on my computer. For developers, this means they can work on private projects or client code with absolute certainty that they\u2019re not leaking intellectual property to a public-facing LLM. Plus, you might just notice an improved experience when working with a developer-focused LLM versus a general-purpose model.<\/p>\n<p>Uncensored creativity<\/p>\n<p><img class=\"e_jg\" decoding=\"async\" loading=\"lazy\"  title=\"ChatGPT censorship\"  alt=\"ChatGPT censorship\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/12\/ChatGPT-censorship-scaled.png\"\/><\/p>\n<p>Dhruv Bhutani \/ Android Authority<\/p>\n<p>Here\u2019s where things get interesting. Public-facing AI models are built with guardrails in mind. Anything deemed too extreme or potentially dangerous is effectively blocked off. This can be problematic if you\u2019re a naturally curious person, or if you want to do some research that goes a step beyond what ChatGPT can offer. In my case, one of my weekend hobbies is writing gritty thriller short stories, and when I want to do some research for a thriller or horror arc, ChatGPT just doesn\u2019t go all that far.<\/p>\n<p>However, if you are running a local LLM, you can download an unlocked model that is designed to follow instructions without moralizing. This can unlock creative freedom by letting you develop characters with whatever kind of personality you are aiming for. 
The same freedom can also be useful if you dabble in tabletop games and want to generate a gruesome new D&amp;D campaign.<\/p>\n<p>It works when ChatGPT won\u2019t<\/p>\n<p><img class=\"e_jg\" decoding=\"async\" loading=\"lazy\"  title=\"Ollama models list\"  alt=\"Ollama models list\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/12\/Ollama-models-list.jpg\"\/><\/p>\n<p>Robert Triggs \/ Android Authority<\/p>\n<p>I travel a lot and prefer to have most of my commonly used services available offline. Predictably, reliance on cloud AI can become a major pain point the moment you step on a plane or visit a remote location with spotty internet. Even if you have Wi-Fi on a plane, the high latency and flaky connection often kill a query midway. It\u2019s not a good user experience.<\/p>\n<p>My local AI setup, on the other hand, travels with me. I\u2019ve got Ollama set up on my laptop, and I can spin it up whenever I need it. As geeky as that might sound, it\u2019s been a revelation for productivity, as I can get pretty much everything I use ChatGPT for running on my computer at 36,000 feet in the air. If you want to take it a step further, you could even use an app like SmolChat to run a smaller <a href=\"http:\/\/androidauthority.com\/install-deepseek-android-3521203\/\" target=\"_blank\" rel=\"nofollow noopener\">language model on your Android phone<\/a> and use it for simpler tasks. 
There\u2019s a lot of flexibility here.<\/p>\n<p>Subscription fatigue<\/p>\n<p><img class=\"e_jg\" decoding=\"async\" loading=\"lazy\"  title=\"DeepSeek LM Studio setup\"  alt=\"DeepSeek LM Studio setup\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/12\/DeepSeek-LM-Studio-setup.jpg\"\/><\/p>\n<p>Robert Triggs \/ Android Authority<\/p>\n<p>In an era where everything from media consumption to reading the news is becoming a <a href=\"https:\/\/www.androidauthority.com\/best-subscriptions-apps-services-3565501\/\" rel=\"nofollow noopener\" target=\"_blank\">subscription<\/a>, it should come as no surprise that getting a decent AI experience is also, you guessed it, locked behind a subscription. I understand that building and running an AI model is extremely expensive. However, no single model is good at every task, and if I started running the numbers, that $20-a-month fee for every LLM I sign up for starts looking very spendy very quickly.<\/p>\n<p>Local AI is the buy-it-for-life alternative. The only real cost is the hardware you run it on. If you\u2019re on a PC, you\u2019ll want a beefy graphics card. On the Apple side of things, even a MacBook Air can run reasonably sized models at a decent clip. The open-source models themselves are free. In fact, if you have a recent computer, you likely already have all the hardware you need to get your own local AI instance up and running. 
Moreover, while you have to wait for OpenAI to release its yearly upgrade to ChatGPT, you can test out new models and variations practically every week via Hugging Face.<\/p>\n<p>Speed and latency<\/p>\n<p><img class=\"e_jg\" decoding=\"async\" loading=\"lazy\"  title=\"LMStudio Deepseek custom google sheet\"  alt=\"LMStudio Deepseek custom google sheet\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/12\/LMStudio-Deepseek-custom-google-sheet.png\"\/><\/p>\n<p>Dhruv Bhutani \/ Android Authority<\/p>\n<p>While your computer is obviously not going to compete with the massively overpowered data centers running ChatGPT, depending on how you use these LLMs, you might find that the gap is smaller than you\u2019d imagine. Running locally on reasonably powered hardware, tasks like text generation and summarisation can often happen just as quickly as with a commercial tool, and sometimes even quicker. Additionally, you don\u2019t have to deal with latency or server load. If you\u2019ve used ChatGPT with any frequency, you\u2019ve inevitably come across a spinning processing wheel, or sometimes even the dreaded connection timeout. Neither is an issue when your AI model runs on your own computer.<\/p>\n<p>Is running a local AI model for everyone?<\/p>\n<p>Look, I\u2019m not gonna sugarcoat it. Local AI isn\u2019t perfect, nor is it for everyone. If you are solving complex mathematical problems, or want the absolute state-of-the-art reasoning capabilities available, a massive cloud model like GPT-5.1 or Claude is going to be your best bet. Those models simply work from a far larger parameter count than anything that can fit on your computer, and your hardware couldn\u2019t run them anyway. But you don\u2019t necessarily need that power on a day-to-day basis. 
If my experience is anything to go by, 90% of my daily AI tasks like summarisation, editing, code explanation, and brainstorming work perfectly fine with local models.<\/p>\n<p class=\"p1\">Is it a nerdy activity? Yes. Is it fun to dabble in? Also yes. It\u2019s a bit like the early wild, wild west days of the internet, except entirely in your control.<\/p>\n<p>Thank you for being part of our community. 
Read our\u00a0<a class=\"c-link\" href=\"https:\/\/www.androidauthority.com\/android-authority-comment-policy\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" data-stringify-link=\"https:\/\/www.androidauthority.com\/android-authority-comment-policy\/\" data-sk=\"tooltip_parent\">Comment Policy<\/a> before posting.<\/p>\n","protected":false},"excerpt":{"rendered":"Dhruv Bhutani \/ Android Authority I was one of the first people to jump on the ChatGPT bandwagon.&hellip;\n","protected":false},"author":2,"featured_media":320016,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-320015","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/320015","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=320015"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/320015\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/320016"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=320015"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=320015"},{"tax
onomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=320015"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}