{"id":144609,"date":"2025-11-21T02:46:12","date_gmt":"2025-11-21T02:46:12","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/144609\/"},"modified":"2025-11-21T02:46:12","modified_gmt":"2025-11-21T02:46:12","slug":"americas-open-source-ai-gambit-two-labs-one-question-can-the-us-compete","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/144609\/","title":{"rendered":"America&#8217;s Open Source AI Gambit: Two Labs, One Question\u2014Can the US Compete?"},"content":{"rendered":"<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">Two American AI labs released open-source models this week, each taking dramatically different approaches to the same problem: how to compete with China&#8217;s dominance in publicly accessible AI systems.<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">Deep Cogito dropped Cogito v2.1, a massive 671-billion-parameter model that its founder, Drishan Arora, calls &#8220;the best open-weight LLM by a U.S. company.&#8221;<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">Not so fast, countered The Allen Institute for AI, which just dropped <a href=\"https:\/\/allenai.org\/blog\/olmo3\" target=\"_blank\" rel=\"nofollow external noopener\" class=\"sc-adb616fe-0 bJsyml\">Olmo 3<\/a>, billing it as &#8220;the best fully open base model.&#8221; Olmo 3 boasts complete transparency, including its training data and code.<\/p>\n<p>\ufeff<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">Ironically, Deep Cognito&#8217;s flagship model is built on a <a href=\"https:\/\/www.deepcogito.com\/research\/cogito-v2-1\" target=\"_blank\" rel=\"nofollow external noopener\" class=\"sc-adb616fe-0 bJsyml\">Chinese foundation<\/a>. Arora acknowledged on X that Cogito v2.1 &#8220;forks off the open-licensed Deepseek base model from November 2024.&#8221;<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">That sparked some criticism and even debate about whether fine-tuning a Chinese model counts as American AI advancement, or whether it just proves how far U.S. 
Quoting the "best open-weight LLM by a US company" line, elie (@eliebakouch) wrote on X: "this is cool but i'm not sure about emphasizing the 'US' part since the base model is deepseek V3" (https://t.co/SfD3dR5OOy, November 19, 2025).

Regardless, the efficiency gains Cogito shows over DeepSeek are real.

Deep Cogito claims Cogito v2.1 produces reasoning chains roughly 60% shorter than DeepSeek R1's while maintaining competitive performance.

Using what Arora calls "Iterated Distillation and Amplification," a method that teaches models to develop better intuition through self-improvement loops, the startup trained its model in a mere 75 days on infrastructure from RunPod and Nebius.
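Deep Cogito has not published its training code, so the Python sketch below is only a conceptual illustration of an iterated distillation-and-amplification loop under toy assumptions: a one-parameter stand-in "model," best-of-n sampling as the amplification step, and a simple pull toward the amplified answer as the distillation step. A real pipeline would use long reasoning traces and supervised fine-tuning at scale; only the amplify-then-distill structure reflects what the article describes.

```python
# Toy sketch of an iterated distillation-and-amplification (IDA) loop.
# Not Deep Cogito's training code: the model, task, and scorer are stand-ins
# meant only to show the cycle of amplifying with extra inference-time compute,
# then distilling the improved behavior back into the model.

import random

TARGET = 7.0  # toy task: the model should learn to output this value


def score(answer: float) -> float:
    """Toy reward: closer to the target is better."""
    return -abs(answer - TARGET)


class ToyModel:
    """Stand-in for an LLM policy: one parameter plus sampling noise."""

    def __init__(self, weight: float = 0.0):
        self.weight = weight

    def __call__(self) -> float:
        return self.weight + random.gauss(0, 1.0)


def amplify(model: ToyModel, n_samples: int = 16) -> float:
    """Amplification: spend extra compute at inference time (best-of-n here,
    longer reasoning chains in the real setting) to beat a single-shot answer."""
    return max((model() for _ in range(n_samples)), key=score)


def distill(model: ToyModel, amplified_answer: float, lr: float = 0.5) -> None:
    """Distillation: pull the model's direct, 'intuitive' output toward the
    amplified answer, standing in for supervised fine-tuning."""
    model.weight += lr * (amplified_answer - model.weight)


if __name__ == "__main__":
    model = ToyModel()
    for step in range(10):
        better = amplify(model)
        distill(model, better)
        print(f"step {step}: weight={model.weight:.2f}, "
              f"single-shot score={score(model.weight):.2f}")
```

Each pass through the loop makes the model's unaided output a little closer to what it previously needed extra compute to reach, which is the intuition behind the shorter reasoning chains Deep Cogito reports.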
If the benchmarks hold up, this would be the most powerful open-source LLM currently maintained by a U.S. team.

Why it matters

So far, China has been setting the pace in open-source AI, and U.S. companies increasingly rely, quietly or openly, on Chinese base models to stay competitive.

That dynamic is risky. If Chinese labs become the default plumbing for open AI worldwide, U.S. startups lose technical independence, bargaining power, and the ability to shape industry standards.

Open-weight AI determines who controls the raw models that every downstream product depends on.

Right now, Chinese open-source models (DeepSeek, Qwen, Kimi, MiniMax) dominate global adoption (https://www.washingtonpost.com/technology/2025/10/13/china-us-open-source-ai/) because they are cheap, fast, highly efficient, and constantly updated.

Image: Artificialanalysis.ai

Many U.S. startups already build on them, even when they publicly avoid admitting it (https://www.aljazeera.com/economy/2025/11/13/chinas-ai-is-quietly-making-big-inroads-in-silicon-valley).

That means U.S. firms are building businesses on top of foreign intellectual property, foreign training pipelines, and foreign hardware optimizations. Strategically, that puts America in the same position it once faced with semiconductor fabrication: increasingly dependent on someone else's supply chain.

Deep Cogito's approach of starting from a DeepSeek fork shows the upside (rapid iteration) and the downside (dependency).

The Allen Institute's approach of building Olmo 3 with full transparency shows the alternative: if the U.S. wants open AI leadership, it has to rebuild the stack itself, from data to training recipes to checkpoints. That is labor-intensive and slow, but it preserves sovereignty over the underlying technology.

In theory, if you already liked DeepSeek and use it online, Cogito will give you better answers most of the time. If you use it via the API, you will be even happier: shorter reasoning chains mean fewer output tokens, so good answers cost less to generate.
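A back-of-the-envelope calculation makes the savings concrete. The per-token price and chain length below are hypothetical placeholders; only the "60% shorter" figure comes from Deep Cogito's claim. The point is simply that, at equal per-token pricing, an answer built on a 60% shorter reasoning chain costs roughly 60% less to generate.

```python
# Hypothetical cost comparison: assumed $2.00 per million output tokens for
# both models and an assumed 10,000-token DeepSeek R1 reasoning chain.
# Only the 60% reduction is taken from the article; the rest is illustrative.

PRICE_PER_M_TOKENS = 2.00                            # assumed price, USD per 1M output tokens
R1_CHAIN_TOKENS = 10_000                             # assumed R1 reasoning-chain length
COGITO_CHAIN_TOKENS = R1_CHAIN_TOKENS * (1 - 0.60)   # 60% shorter, per Deep Cogito's claim


def cost(tokens: float) -> float:
    """Dollar cost of generating `tokens` output tokens at the assumed price."""
    return tokens / 1_000_000 * PRICE_PER_M_TOKENS


print(f"R1 answer:     {R1_CHAIN_TOKENS:>6} tokens -> ${cost(R1_CHAIN_TOKENS):.4f}")
print(f"Cogito answer: {COGITO_CHAIN_TOKENS:>6.0f} tokens -> ${cost(COGITO_CHAIN_TOKENS):.4f}")
```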
The Allen Institute took the opposite tack. The whole Olmo 3 family arrives with Dolma 3 (https://huggingface.co/datasets/allenai/dolma), a 5.9-trillion-token training dataset built from scratch, plus complete code, recipes, and checkpoints from every training stage.

The nonprofit released three model variants (Base, Think, and Instruct) at 7 billion and 32 billion parameters.

"True openness in AI isn't just about access—it's about trust, accountability, and shared progress," the institute wrote.

Olmo 3-Think 32B is the first fully open reasoning model at that scale, trained on roughly one-sixth the tokens of comparable models like Qwen 3 while achieving competitive performance.

Image: Ai2

Deep Cogito secured $13 million in seed funding led by Benchmark in August (https://www.finsmes.com/2025/08/deep-cogito-raises-13m-in-seed-funding.html).
The startup plans to release frontier models of up to 671 billion parameters trained on "significantly more compute with better datasets."

Meanwhile, Nvidia backed Olmo 3's development, with vice president Kari Briski calling it (https://allenai.org/olmo) essential for "developers to scale AI with open, U.S.-built models."

The institute trained on Google Cloud's H100 GPU clusters, using roughly 2.5 times less compute than Meta's Llama 3.1 8B required.

Cogito v2.1 is available to try for free online at https://chat.deepcogito.com/ and can be downloaded from https://huggingface.co/deepcogito/cogito-671b-v2.1, but beware: at 671 billion parameters, the weights alone run to hundreds of gigabytes even at reduced precision, so it needs serious multi-GPU hardware rather than a single consumer card.

Olmo is available for testing at https://playground.allenai.org, and the models can be downloaded from https://huggingface.co/collections/allenai/olmo-3.
The smaller Olmo variants are far more consumer-friendly, depending on which one you choose; the 7-billion-parameter models fit on a single high-end consumer GPU.
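For those smaller checkpoints, a standard Hugging Face transformers workflow should be enough. A minimal sketch follows; note that the repository id used here is a guess at the naming inside the linked Olmo 3 collection rather than a confirmed identifier, and the snippet assumes the transformers and accelerate packages plus a GPU with roughly 15 GB of memory for 16-bit weights.

```python
# Minimal sketch of running a smaller Olmo 3 checkpoint locally with
# Hugging Face transformers. The MODEL_ID below is an assumption; check
# https://huggingface.co/collections/allenai/olmo-3 for the exact names.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "allenai/Olmo-3-7B-Instruct"  # assumed id; confirm against the collection

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # 16-bit weights, roughly half the memory of float32
    device_map="auto",           # place layers on available GPUs, spill to CPU if needed
)

prompt = "Explain in two sentences what makes a language model 'fully open'."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The 32B variants follow the same pattern but need a data-center GPU, several consumer cards, or aggressive quantization to fit in memory.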