{"id":454608,"date":"2026-02-03T00:14:14","date_gmt":"2026-02-03T00:14:14","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/454608\/"},"modified":"2026-02-03T00:14:14","modified_gmt":"2026-02-03T00:14:14","slug":"i-tried-a-claude-code-alternative-thats-local-open-source-and-completely-free-how-it-works","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/454608\/","title":{"rendered":"I tried a Claude Code alternative that&#8217;s local, open source, and completely free &#8211; how it works"},"content":{"rendered":"<p> <img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2026\/02\/how-to-get-started-with-goose-a-free-open-source-alternative-to-claude-code.jpg\" alt=\"How to get started with Goose, a free open-source alternative to Claude Code\" width=\"1280\" height=\"737.28\" fetchpriority=\"low\"\/>   Elyse Betters Picaro \/ ZDNET<\/p>\n<p>Follow ZDNET:\u00a0<a href=\"https:\/\/cc.zdnet.com\/v1\/otc\/00hQi47eqnEWQ6T9d4QLBUc?element=BODY&amp;element_label=Add+us+as+a+preferred+source&amp;module=LINK&amp;object_type=text-link&amp;object_uuid=4fb444c8-973b-4291-b759-aa6f53cf4ffc&amp;position=1&amp;template=article&amp;track_code=__COM_CLICK_ID__&amp;url=https%3A%2F%2Fwww.google.com%2Fpreferences%2Fsource%3Fq%3Dzdnet.com&amp;view_instance_uuid=1ba8988a-c60a-48a2-bf62-81224bf5fcc1&amp;split_test_identifier=deals_module&amp;split_test_variant=test2\" rel=\"noopener nofollow sponsored\" target=\"_blank\">Add us as a preferred source<\/a>\u00a0on Google.<\/p>\n<p>ZDNET&#8217;s key takeaways:<\/p>\n<ul>\n<li>Free AI tools Goose and Qwen3-coder may replace a pricey Claude Code plan.<\/li>\n<li>Setup is straightforward but requires a powerful local machine.<\/li>\n<li>Early tests show promise, though issues remain with accuracy and retries.<\/li>\n<\/ul>\n<p>Jack Dorsey is the founder of Twitter (now X), Square (now Block), and Bluesky (still blue). 
Back in July, he posted a <a href=\"https:\/\/x.com\/jack\/status\/1948128026857725984\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">fairly cryptic statement on X<\/a>, saying &#8220;goose + qwen3-coder = wow&#8221;.<\/p>\n<p>Also:\u00a0<a href=\"https:\/\/www.zdnet.com\/article\/ive-tested-free-vs-paid-ai-coding-tools-heres-which-one-id-actually-use\/\" rel=\"nofollow noopener\" target=\"_blank\">I&#8217;ve tested free vs. paid AI coding tools &#8211; here&#8217;s which one I&#8217;d actually use<\/a><\/p>\n<p>Since then, interest has grown in both Goose and Qwen3-coder. <a href=\"https:\/\/www.zdnet.com\/article\/blocks-new-open-source-ai-agent-goose-lets-you-change-direction-mid-air\/\" rel=\"nofollow noopener\" target=\"_blank\">Goose<\/a>, developed by Dorsey&#8217;s company Block, is an open-source agent framework, similar to <a href=\"https:\/\/www.zdnet.com\/article\/i-used-claude-code-to-vibe-code-mac-app\/\" rel=\"nofollow noopener\" target=\"_blank\">Claude Code<\/a>. <a href=\"https:\/\/www.zdnet.com\/article\/alibabas-qwen-ai-chatbot-boasts-10-million-downloads-in-its-first-week-heres-what-it-offers\/\" rel=\"nofollow noopener\" target=\"_blank\">Qwen3-coder<\/a> is a coding-centric large language model similar to Sonnet-4.5. Both are free.  <\/p>\n<p>Together, suggests the internet, they add up to a fully free competitor to Claude Code. But do they? Really? I decided to find out. 
<\/p>\n<p>Also:\u00a0<a href=\"https:\/\/www.zdnet.com\/article\/i-used-claude-code-to-vibe-code-mac-app\/\" rel=\"nofollow noopener\" target=\"_blank\">I used Claude Code to vibe code a Mac app in 8 hours, but it was more work than magic<\/a><\/p>\n<p>This is the first of three articles that will discuss the integration of Goose (the agent framework), <a href=\"https:\/\/www.zdnet.com\/article\/i-tried-local-ai-on-m1-mac-brutal-experience\/\" rel=\"nofollow noopener\" target=\"_blank\">Ollama<\/a> (an LLM server), and Qwen3-coder (the LLM).\u00a0 <\/p>\n<p>In this article, I&#8217;ll show you how to get everything working. In the next article, I&#8217;ll give you a more in-depth understanding of the roles each of these three tools plays in the AI agent coding process. And then, finally, I&#8217;ll attempt to use these tools to build a fully powered iPad app as an extension of the apps I&#8217;ve been building with Claude Code.  <\/p>\n<p>Okay, let&#8217;s get started. I&#8217;m building this on my <a href=\"https:\/\/www.zdnet.com\/article\/best-macbooks\/\" rel=\"nofollow noopener\" target=\"_blank\">Mac<\/a>, but you can install all three tools on your <a href=\"https:\/\/www.zdnet.com\/article\/best-windows-laptop\/\" rel=\"nofollow noopener\" target=\"_blank\">Windows<\/a> or <a href=\"https:\/\/www.zdnet.com\/article\/best-linux-laptop\/\" rel=\"nofollow noopener\" target=\"_blank\">Linux<\/a>\u00a0machine, if that&#8217;s how you roll.  <\/p>\n<p>  Downloading the software  <\/p>\n<p>You&#8217;ll need to start by downloading both Goose and Ollama. You&#8217;ll later download the Qwen3-coder model from within Ollama: <\/p>\n<p>I originally downloaded and installed Goose first. But I couldn&#8217;t get it to talk to Ollama. Can you guess what I did wrong? Yep. I hadn&#8217;t yet downloaded and set up Ollama.  <\/p>\n<p>  Installing Ollama and Qwen3-coder  <\/p>\n<p>My recommendation is to install Ollama first. 
As I mentioned, I&#8217;m using MacOS, but you can use whatever you prefer. You can also install a command-line version of Ollama, but I prefer the app version, so that&#8217;s what we&#8217;ll be exploring: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"download-ollama\" width=\"1280\" height=\"552.6241134751773\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>Download Ollama. Then, double-click the installer. Once the application loads, you&#8217;ll see a chat-like interface. To the right, you&#8217;ll see the model. Mine defaulted to gpt-oss-20b.\u00a0 <\/p>\n<p>Also:\u00a0<a href=\"https:\/\/www.zdnet.com\/article\/google-gemini-personal-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">Gemini can look through your emails and photos to &#8216;help&#8217; you now &#8211; but should you let it?<\/a><\/p>\n<p>Click that, and a model list will pop up. I chose Qwen3-coder:30b, where 30b refers to the number of model parameters. This is a coding-optimized model with about 30 billion parameters: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"choose-model\" width=\"1280\" height=\"999.2982456140351\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>Note that the model won&#8217;t download until it&#8217;s forced to answer a prompt. I typed the word &#8220;test,&#8221; and the model downloaded: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"download-model\" width=\"1280\" height=\"999.2982456140351\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>Note that this model is 17GB, so make sure you have enough storage space. This requirement highlights one of the big benefits of this whole project. Your <a href=\"https:\/\/www.zdnet.com\/article\/5-reasons-i-use-local-ai-on-my-desktop-instead-of-chatgpt-gemini-or-claude\/\" rel=\"nofollow noopener\" target=\"_blank\">AI is local<\/a>, running on your machine. You&#8217;re not sending anything to the cloud.  
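<\/p>
<p>The app flow above also has a command-line equivalent. If you installed the CLI build of Ollama instead, pulling the model and starting the server look roughly like this. The <code>ollama pull<\/code> and <code>ollama list<\/code> commands are standard, but treat the environment-variable names as assumptions to check against the Ollama docs for your installed version:<\/p>

```shell
# Sketch of the command-line route (CLI build of Ollama), not the desktop app.
# Pull the ~17GB coding model up front, instead of waiting for the first
# prompt to trigger the download:
ollama pull qwen3-coder:30b

# Confirm the model landed in the local store (hidden in ~/.ollama by default):
ollama list

# Start the server so other apps can reach it, with a larger context window.
# OLLAMA_HOST and OLLAMA_CONTEXT_LENGTH are assumptions to verify for your version:
OLLAMA_HOST=0.0.0.0:11434 OLLAMA_CONTEXT_LENGTH=32768 ollama serve
```

<p>Either way, the server ends up listening on port 11434, which is where Goose will look for it in a later step.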
<\/p>\n<p>Also: <a href=\"https:\/\/www.zdnet.com\/article\/how-to-easily-run-your-favorite-local-ai-models-on-linux-with-this-handy-app\/\" rel=\"nofollow noopener\" target=\"_blank\">How to easily run your favorite local AI models on Linux with this handy app<\/a><\/p>\n<p>Once you&#8217;ve installed Qwen3-coder, you need to make the Ollama instance visible to other applications on your computer. To take this step, select Settings from the Ollama menu on your menu bar: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"allow-access\" width=\"1280\" height=\"999.2982456140351\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>Turn on Expose Ollama to the network. I let Ollama install itself in the .ollama directory. This approach hides the directory, so remember that you have a 17GB file buried in there.\u00a0 <\/p>\n<p>Finally, I set my context length to 32K. I have 128GB of RAM on my machine, so if I start to run out of context, I&#8217;ll boost it. But I wanted to see how well this approach worked with a smaller context space.  <\/p>\n<p>Also, notice that I did not sign in to Ollama. You can create an account and use some cloud services. But we&#8217;re attempting to do this entirely for free and entirely on the local computer, so I&#8217;m avoiding signing in whenever I can.  <\/p>\n<p>Also:\u00a0<a href=\"https:\/\/www.zdnet.com\/article\/3-ways-to-determine-when-delegate-ai-agent\/\" rel=\"nofollow noopener\" target=\"_blank\">Is your AI agent up to the task? 3 ways to determine when to delegate<\/a><\/p>\n<p>And that&#8217;s it for Ollama and Qwen3-coder. You will need to have Ollama launched and running whenever you use Goose, but you probably won&#8217;t interact with it much after this.  <\/p>\n<p>  Installing Goose  <\/p>\n<p>Next up, let&#8217;s install Goose. Go ahead and run the installer. As with Ollama, there are multiple Goose implementations. 
I chose the MacOS Apple Silicon Desktop version: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"download-goose\" width=\"1280\" height=\"1030.4995904995906\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>Once you launch Goose for the first time, you&#8217;ll get this Welcome screen. You have several configuration choices, but since we&#8217;re going for an all-free setup, go down to the Other Providers section and click Go to Provider Settings: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"welcome-goose\" width=\"1280\" height=\"1394.2651036970244\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>Here, you&#8217;ll see a very large list of various agent tools and LLMs you can run. Scroll down, find Ollama, and hit Configure: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"providers\" width=\"1280\" height=\"1394.2651036970244\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>Once you do that step, you&#8217;ll be asked to Configure Ollama. This is where I got a bit confused because, silly me, I thought &#8220;Configure Ollama&#8221; meant I was actually configuring Ollama. Not so much. What you&#8217;re doing (here, and for all the other providers) is configuring your connection, in this case to Ollama: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"goose-ollama\" width=\"1280\" height=\"1291.0503597122304\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>You&#8217;ll be asked to choose a model. 
Once again, choose qwen3-coder:30b: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"goose-model\" width=\"1280\" height=\"948.0713128038897\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>Once you&#8217;ve chosen both Ollama and qwen3-coder:30b, hit Select Model: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"select-model\" width=\"1280\" height=\"843.1918505942275\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>Congratulations. You&#8217;ve now installed and configured a local coding agent, running on your computer.  <\/p>\n<p>  Taking Goose for a spin  <\/p>\n<p>As with almost any other chatbot, you&#8217;ll want to type a prompt into the prompt area. But first, it&#8217;s not a bad idea to let Goose know the directory you&#8217;ll be using. For my initial test, I set Goose to work from a temporary folder. You specify this at (1) by tapping the directory already shown: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"test1\" width=\"1280\" height=\"945.5948553054662\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>Also note that the model you&#8217;re running is indicated at (2). You can set Goose up to run multiple models, but we&#8217;re just working with this one for now.  <\/p>\n<p>As a test, I used my <a href=\"https:\/\/www.zdnet.com\/article\/how-i-test-an-ai-chatbots-coding-ability-and-you-can-too\/\" rel=\"nofollow noopener\" target=\"_blank\">standard test challenge<\/a>\u00a0&#8212; building a simple WordPress plugin. In its first run, Goose\/Qwen3 failed. It generated a plugin, but it didn&#8217;t work: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"test2\" width=\"1280\" height=\"814.6260387811634\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>In my second and third tries, after explaining what didn&#8217;t work to Goose\/Qwen3, it failed, and failed again.  
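<\/p>
<p>As an aside, the desktop configuration steps above also map onto Goose&#8217;s command-line interface, if you installed that version instead. A minimal sketch, assuming the <code>GOOSE_PROVIDER<\/code> and <code>GOOSE_MODEL<\/code> settings described in Goose&#8217;s documentation; verify the exact names with <code>goose configure<\/code>:<\/p>

```shell
# Sketch: pointing Goose's CLI at the same local model.
# GOOSE_PROVIDER and GOOSE_MODEL follow Goose's docs; treat the exact
# names as assumptions and confirm them via `goose configure`.
export GOOSE_PROVIDER=ollama
export GOOSE_MODEL=qwen3-coder:30b
goose session   # start an interactive agent session against the local Ollama server
```

<p>Nothing about the workflow changes; the CLI simply skips the Welcome screen and the provider list.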
<\/p>\n<p>Also:\u00a0<a href=\"https:\/\/www.zdnet.com\/article\/ai-agents-primitive-reinforcement-learning-complex-memory\/\" rel=\"nofollow noopener\" target=\"_blank\">True agentic AI is years away &#8211; here&#8217;s why and how we get there<\/a><\/p>\n<p>By the third try, it ran the randomization, but didn&#8217;t completely follow directions, which kind of defeated the whole purpose of the original plugin: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"test3\" width=\"1280\" height=\"814.6260387811634\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET<\/p>\n<p>It took five rounds for Goose to get it right, and it was very, very pleased with itself about how right it expected itself to be: <\/p>\n<p> <img decoding=\"async\" src=\"\" alt=\"test4\" width=\"1280\" height=\"814.6260387811634\" fetchpriority=\"low\"\/>   Screenshot by David Gewirtz\/ZDNET  First impressions  <\/p>\n<p>So what do I think about this approach? I was disappointed it took Goose five tries to get my little test to work. When I tested a bunch of free chatbots with this assignment, all but <a href=\"https:\/\/www.zdnet.com\/article\/xs-grok-did-surprisingly-well-in-my-ai-coding-tests\/\" rel=\"nofollow noopener\" target=\"_blank\">Grok<\/a> and a pre-Gemini 3 <a href=\"https:\/\/www.zdnet.com\/article\/how-to-use-google-gemini-to-find-cheapest-flights\/\" rel=\"nofollow noopener\" target=\"_blank\">Gemini<\/a>\u00a0got my little test right on the first try.  
<\/p>\n<p>Also:\u00a0<a href=\"https:\/\/www.zdnet.com\/article\/how-i-test-an-ai-chatbots-coding-ability-and-you-can-too\/\" rel=\"nofollow noopener\" target=\"_blank\">How I test an AI chatbot&#8217;s coding ability &#8211; and you can, too<\/a><\/p>\n<p>But a <a href=\"https:\/\/www.zdnet.com\/article\/ive-tested-free-vs-paid-ai-coding-tools-heres-which-one-id-actually-use\/\" rel=\"nofollow noopener\" target=\"_blank\">big difference between chatbot coding and agentic coding<\/a> is that agentic coding tools like Claude Code and Goose work on the actual source code. Therefore, repeated corrections do improve the actual codebase.  <\/p>\n<p>When my colleague <a href=\"https:\/\/www.zdnet.com\/article\/tried-local-ai-on-m1-mac-brutal-experience\/\" rel=\"nofollow noopener\" target=\"_blank\">Tiernan Ray tried Ollama on his 16GB M1 Mac<\/a>, he found performance was unbearable. But I&#8217;m running this setup on an <a href=\"https:\/\/www.zdnet.com\/article\/i-replaced-my-windows-pc-with-a-mac-studio-for-a-week-here-are-my-takeaways-so-far\/\" rel=\"nofollow noopener\" target=\"_blank\">M4 Max Mac Studio<\/a>\u00a0with 128GB of RAM. I even had Chrome, Fusion, Final Cut, VS Code, Xcode, Wispr Flow, and Photoshop open at the same time. <\/p>\n<p>So far, I&#8217;ve only run a fairly simple programming test, but I found that overall performance is quite good. I didn&#8217;t see a tangible difference in turnaround from prompts between the local instance running Goose on my Mac Studio and local\/cloud hybrid products like Claude Code and OpenAI Codex that use the AI companies&#8217; enormous infrastructures.  <\/p>\n<p>Also:\u00a0<a href=\"https:\/\/www.zdnet.com\/article\/4-new-roles-will-lead-agentic-ai-revolution\/\" rel=\"nofollow noopener\" target=\"_blank\">4 new roles will lead the agentic AI revolution &#8211; here&#8217;s what they require<\/a><\/p>\n<p>But these are still first impressions. 
I&#8217;ll be better able to tell you if I think this free solution can replace the spendy alternatives like Claude Code&#8217;s $100\/mo Max plan or OpenAI&#8217;s $200\/mo Pro plan once I run a big project through it. That analysis is still to come, so stay tuned.  <\/p>\n<p>Have you tried running a coding-focused LLM locally with tools like Goose, Ollama, or Qwen? How did setup go for you, and what hardware are you running it on? If you&#8217;ve used cloud options like Claude or OpenAI Codex, how does local performance and output quality compare? Let us know in the comments below.  <\/p>\n<p>You can follow my day-to-day project updates on social media. Be sure to subscribe to <a href=\"https:\/\/advancedgeekery.substack.com\/\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">my weekly update newsletter<\/a>, and follow me on Twitter\/X at <a href=\"https:\/\/twitter.com\/davidgewirtz\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">@DavidGewirtz<\/a>, on Facebook at <a href=\"https:\/\/www.facebook.com\/davidgewirtz\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">Facebook.com\/DavidGewirtz<\/a>, on Instagram at <a href=\"https:\/\/www.instagram.com\/DavidGewirtz\/\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">Instagram.com\/DavidGewirtz<\/a>, on Bluesky at <a href=\"https:\/\/bsky.app\/profile\/davidgewirtz.com\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">@DavidGewirtz.com<\/a>, and on YouTube at <a href=\"https:\/\/www.youtube.com\/user\/DavidGewirtzTV\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">YouTube.com\/DavidGewirtzTV<\/a>.<\/p>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><script async src=\"\/\/www.instagram.com\/embed.js\"><\/script><\/p>\n","protected":false},"excerpt":{"rendered":"Elyse Betters Picaro \/ ZDNET Follow ZDNET:\u00a0Add us as a preferred source\u00a0on 
Google. ZDNET&#8217;s key takeaways Free AI&hellip;\n","protected":false},"author":2,"featured_media":454609,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-454608","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/454608","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=454608"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/454608\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/454609"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=454608"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=454608"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=454608"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}