{"id":185130,"date":"2025-12-15T02:54:09","date_gmt":"2025-12-15T02:54:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/185130\/"},"modified":"2025-12-15T02:54:09","modified_gmt":"2025-12-15T02:54:09","slug":"2025-open-models-year-in-review","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/185130\/","title":{"rendered":"2025 Open Models Year in Review"},"content":{"rendered":"<p>Welcome to the first Artifacts Recap, where we highlight the most notable and impactful open model releases of this year. And what a year it has been! Starting into the year, the open model landscape was seen as lagging behind severely, with open models being mostly a choice for those who needed privacy or wanted to fine-tune models for their use cases. <\/p>\n<p>While these are still compelling reasons for open models, the performance in this year has increased dramatically during the last 12 months. In 2024, the ecosystem was mostly relying on Llama 3 and looking ahead to the next generation of that model family, while Qwen2.5, QwQ and DeepSeek V2 \/ V2.5 \/ V3 were known to those deep into the ecosystem as capable but still niche picks. In 2025, DeepSeek and Qwen became household names with <a href=\"https:\/\/www.interconnects.ai\/p\/deepseek-r1-recipe-for-o1\" rel=\"nofollow noopener\" target=\"_blank\">R1<\/a> and <a href=\"https:\/\/www.interconnects.ai\/p\/qwen-3-the-new-open-standard\" rel=\"nofollow noopener\" target=\"_blank\">Qwen 3<\/a> respectively, which resulted in particular a lot of Chinese companies opening their models as well. <\/p>\n<p>As a result, the open ecosystem has immensely accelerated in terms of capabilities, rivaling closed models on most key benchmarks. It is a much more nuanced debate on if they\u2019re delivering as much in real-world usage, where closed models still dominate.<\/p>\n<p>Selecting any number of models as the \u201cbest few\u201d is a nearly impossible task, as the ecosystem is growing so rapidly. 
There are many more relevant categories of open models than just the biggest, text-only models. Open models thrive as the default for many niche use cases across modalities and compute budgets.<\/p>\n<p>To put the scale of our open ecosystem monitoring into perspective: Each day, around 1,000 &#8211; 2,000 models are uploaded to HuggingFace. Out of these 30,000 &#8211; 60,000 models a month, we select roughly 50 for <a href=\"https:\/\/www.interconnects.ai\/t\/artifacts-log\" rel=\"nofollow noopener\" target=\"_blank\">the Artifacts<\/a> series model roundups, which means we cover about 600 models a year. This of course means that some models didn\u2019t make the cut. <\/p>\n<p>Do you think we missed an obvious one? Leave your choice in the comments!<\/p>\n<p>In this post, we start by highlighting our top models of the year in terms of their influence on the AI ecosystem broadly and the trends of open models specifically. We conclude with the complete tier list across model makers in the U.S., China, and the world, based on contributions in 2025.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!v3tx!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ed6b1e9-849f-4026-83a0-4ea18786d6a1_1024x560.jpeg\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img can-restack\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/7ed6b1e9-849f-4026-83a0-4ea18786d6a1_1024.jpeg\" width=\"1024\" height=\"560\" 
data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/7ed6b1e9-849f-4026-83a0-4ea18786d6a1_1024x560.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:560,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>The models that defined this year\u2019s releases and had outsized impact, even outside the open model space.<\/p>\n<p><a href=\"https:\/\/www.interconnects.ai\/p\/deepseek-r1-recipe-for-o1\" rel=\"nofollow noopener\" target=\"_blank\">DeepSeek R1<\/a>: Yes, that one was released this year! On January 20th, to be precise. It is hard to overstate the impact this model release has had, both on the open model, as well as the general AI landscape. Not only did it show that a small team is able to push forward with innovation, it also was released under the MIT license \u2014 while its predecessor, DeepSeek V3, used a custom, <a href=\"https:\/\/github.com\/deepseek-ai\/DeepSeek-V3\/blob\/main\/LICENSE-MODEL\" rel=\"nofollow noopener\" target=\"_blank\">DeepSeek License<\/a> with usage restrictions. This move inspired a lot of (Chinese) labs to release their models openly and under an open license as well. Remember the times when Qwen had their own license <a href=\"https:\/\/huggingface.co\/Qwen\/Qwen2.5-VL-72B-Instruct\" rel=\"nofollow noopener\" target=\"_blank\">for their most capable models<\/a>? 
<br \/>It is obvious to say that this release was the most impactful one for this year.<\/p>\n<p><a href=\"https:\/\/www.interconnects.ai\/p\/qwen-3-the-new-open-standard\" rel=\"nofollow noopener\" target=\"_blank\">Qwen 3<\/a>: It might be unfair to put a whole model family in the same ranks as other models in this list. Qwen3 covers everything: From <a href=\"https:\/\/huggingface.co\/collections\/Qwen\/qwen3\" rel=\"nofollow noopener\" target=\"_blank\">general models<\/a> in all sizes and forms (both dense and MoE), to <a href=\"https:\/\/huggingface.co\/collections\/Qwen\/qwen3-vl\" rel=\"nofollow noopener\" target=\"_blank\">vision<\/a> and <a href=\"https:\/\/huggingface.co\/collections\/Qwen\/qwen3-omni\" rel=\"nofollow noopener\" target=\"_blank\">omni<\/a>, <a href=\"https:\/\/huggingface.co\/collections\/Qwen\/qwen3-coder\" rel=\"nofollow noopener\" target=\"_blank\">coding<\/a>, <a href=\"https:\/\/huggingface.co\/collections\/Qwen\/qwen3-embedding\" rel=\"nofollow noopener\" target=\"_blank\">embedding<\/a> and <a href=\"https:\/\/huggingface.co\/collections\/Qwen\/qwen3-reranker\" rel=\"nofollow noopener\" target=\"_blank\">reranker<\/a>, cause why wouldn\u2019t they? <\/p>\n<p>While Qwen2.5 was mostly known as an insider tip and heavily used by academia, Qwen3 is regarded as the choice for a lot of problems, especially in terms of multilinguality. 
It is therefore no wonder that a lot of academic experiments are conducted on Qwen-based models, <a href=\"https:\/\/www.interconnects.ai\/p\/reinforcement-learning-with-random\" rel=\"nofollow noopener\" target=\"_blank\">which might have consequences in terms of reproducibility on other models.<\/a> By now, Qwen has overtaken Llama in terms of total downloads and as the most-used base model to fine-tune (for more download data, see <a href=\"https:\/\/www.atomproject.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">The ATOM Project<\/a>).<\/p>\n<p><a href=\"https:\/\/www.interconnects.ai\/p\/kimi-k2-and-when-deepseek-moments\" rel=\"nofollow noopener\" target=\"_blank\">Kimi K2<\/a>: Moonshot AI is a laser-focused lab similar to DeepSeek: They work on one model line at a time, while running experiments on smaller models that eventually feed back into the main model line for the next generation. This makes it easy to guess what the next model will look like. Kimi K2 was (and is) a model loved by many, for both its sheer performance and its distinct writing style.<\/p>\n<p>Model releases that are very solid and deservedly well-known in the open model space.<\/p>\n<p><a href=\"https:\/\/www.interconnects.ai\/i\/179633798\/our-picks\" rel=\"nofollow noopener\" target=\"_blank\">MiniMax M2<\/a>: MiniMax M2 was a surprising release this year. 
While MiniMax didn\u2019t come out of nowhere and we\u2019ve been watching every release from them, the leap from the rather mediocre M1 to the very capable M2 is nothing short of remarkable. MiniMax also executed the <a href=\"https:\/\/open.substack.com\/pub\/robotic\/p\/latest-open-artifacts-16-whos-building?r=ozvld&amp;selection=dc458e60-b7d2-4465-a022-a9e12eb55103\" rel=\"nofollow noopener\" target=\"_blank\">(Chinese) model release playbook<\/a> perfectly, leading to lasting usage even after the free period ended, with M2 continuing to be one of the most-used models on OpenRouter.<\/p>\n<p><a href=\"https:\/\/www.interconnects.ai\/i\/170685919\/our-picks\" rel=\"nofollow noopener\" target=\"_blank\">GLM-4.5<\/a>: The story of Zhipu feels similar to Moonshot\u2019s: a team laser-focused on one model line and one goal, developing its models with rigor until one release brings broader attention. That release was Kimi K2 for Moonshot and GLM-4.5 for Zhipu. We also chose 4.5 over 4.6 because it was their breakthrough moment and has the beloved and smaller <a href=\"https:\/\/huggingface.co\/zai-org\/GLM-4.5-Air\" rel=\"nofollow noopener\" target=\"_blank\">Air<\/a> version, which will be released for 4.6 in the near future.<\/p>\n<p><a href=\"https:\/\/www.interconnects.ai\/p\/gpt-oss-openai-validates-the-open\" rel=\"nofollow noopener\" target=\"_blank\">GPT-OSS<\/a>: The long-awaited open model release by OpenAI. Flexing its muscles with sheer performance, this model is the driving force behind many agentic apps, in which it shines. Weak in general world knowledge and multilingual tasks, GPT-OSS must be used in very specific settings and setups, where it then outshines alternatives. 
It also pioneered different (low\/medium\/high) thinking levels, similar to its big closed-source siblings, something we might see adopted by other (open) models in the future.<\/p>\n<p><a href=\"https:\/\/www.interconnects.ai\/p\/gemma-3-olmo-2-32b-and-the-growing\" rel=\"nofollow noopener\" target=\"_blank\">Gemma 3<\/a>: Gemma 3 is beloved for two reasons: its strong multilingual abilities, especially at the &lt;30B size range, and its vision capabilities. The latter is an area where the Western open model space severely lacks strong options aside from Gemma and Moondream. Hopefully, this might change in the coming year!<\/p>\n<p><a href=\"https:\/\/www.interconnects.ai\/p\/olmo-3-americas-truly-open-reasoning\" rel=\"nofollow noopener\" target=\"_blank\">Olmo 3<\/a>: As it has for the last few years, Ai2 (where Nathan works) delivered another update to the best models with all data, code, weights, logs, and methods released. Such releases are crucial for researchers, who cannot study leading models without them. While the industry has shifted to MoEs for peak performance, and Ai2 will too, these dense transformers at the 7B and 32B scales are crucial for the accessibility of finetuning \u2014 a niche that is actually underserved by the model makers after Llama\u2019s downfall and Qwen withholding some base models.<\/p>\n<p>Models that dominate or re-define a certain niche.<\/p>\n<p><a href=\"https:\/\/open.substack.com\/pub\/robotic\/p\/latest-open-artifacts-14-nvidias?r=ozvld&amp;selection=855cd2a6-8cf7-46d1-bd05-79f2ef707afd&amp;utm_campaign=post-share-selection&amp;utm_medium=web&amp;aspectRatio=instagram&amp;textColor=%23ffffff&amp;bgImage=true\" rel=\"nofollow noopener\" target=\"_blank\">Parakeet 3<\/a>: I (Florian) cannot speak highly enough of this speech-to-text model. It completely transformed how I work and interact with my computer. Speaking with your computer is awkward at first but becomes natural quickly. 
It is also a huge boost to (Claude) coding-based workflows when you can just waffle on for paragraphs to explain your problem instead of lazily writing a few sentences. <\/p>\n<p>It is almost boring to see how well this model works while being blazingly fast on a MacBook, beating out every cloud-based platform in terms of end-to-end latency. It is such a good model that a lot of apps with \u201cWhisper\u201d in their names are switching to it as the main engine (something we have seen time and time again \u2014 r\/LocalLlama is not about Llama anymore, nor is r\/StableDiffusion about Stable Diffusion these days). Parakeet 3 adds a whole new selection of languages, including German, which I happily use. Whisper still has support for more languages, at least for now. Oh, did I mention that the majority of the data is open as well?<\/p>\n<p><a href=\"https:\/\/open.substack.com\/pub\/robotic\/p\/latest-open-artifacts-14-nvidias?r=ozvld&amp;selection=695a66c4-5c24-4aa8-886c-a29f09f28276&amp;utm_campaign=post-share-selection&amp;utm_medium=web&amp;aspectRatio=instagram&amp;textColor=%23ffffff&amp;bgImage=true\" rel=\"nofollow noopener\" target=\"_blank\">Nemotron 2<\/a>: NVIDIA, the second: They are also in the open model LLM business (well, and also <a href=\"https:\/\/huggingface.co\/nvidia\/GR00T-N1.5-3B\" rel=\"nofollow noopener\" target=\"_blank\">VLAs<\/a>, <a href=\"https:\/\/huggingface.co\/collections\/nvidia\/reward-models-10-2025\" rel=\"nofollow noopener\" target=\"_blank\">reward models<\/a>, <a href=\"https:\/\/huggingface.co\/collections\/nvidia\/clara-biology\" rel=\"nofollow noopener\" target=\"_blank\">biology<\/a>, <a href=\"https:\/\/huggingface.co\/collections\/nvidia\/lyra\" rel=\"nofollow noopener\" target=\"_blank\">gaussian splatting<\/a>, <a href=\"https:\/\/huggingface.co\/collections\/nvidia\/gen3c\" rel=\"nofollow noopener\" target=\"_blank\">video generation<\/a>, and more). 
Aside from pruning and post-training other <a href=\"https:\/\/huggingface.co\/nvidia\/Llama-3_3-Nemotron-Super-49B-v1_5\" rel=\"nofollow noopener\" target=\"_blank\">models<\/a>, they are training their own models under the Nemotron brand. Similar to Parakeet, the vast majority of the data is released openly. Their models are mamba2-transformer hybrids, which improve speed, especially at long contexts, compared to transformer-only models.<\/p>\n<p><a href=\"https:\/\/open.substack.com\/pub\/robotic\/p\/latest-open-models-15-its-qwens-world?r=ozvld&amp;selection=f244d3e8-e5fd-446b-8f01-3c0b43cd8662&amp;utm_campaign=post-share-selection&amp;utm_medium=web&amp;aspectRatio=instagram&amp;textColor=%23ffffff&amp;bgImage=true\" rel=\"nofollow noopener\" target=\"_blank\">Moondream 3<\/a>: Widely regarded as THE player in the vision space, the Moondream team puts a lot of care into their model releases, giving even closed models like GPT or Gemini a run for their money. Those deep in the vision space know this; those who aren\u2019t should. Try the model!<\/p>\n<p><a href=\"https:\/\/open.substack.com\/pub\/robotic\/p\/latest-open-models-15-its-qwens-world?r=ozvld&amp;selection=b89a2edb-6142-483d-8ccb-b159714659ba&amp;utm_campaign=post-share-selection&amp;utm_medium=web&amp;aspectRatio=instagram&amp;textColor=%23ffffff&amp;bgImage=true\" rel=\"nofollow noopener\" target=\"_blank\">Granite 4<\/a>: The IBM team puts out rock-solid (pun intended) releases one after the other, yet is unable to get the attention it deserves. Togglable thinking per prompt was debuted by <a href=\"https:\/\/huggingface.co\/ibm-granite\/granite-3.2-8b-instruct-preview\" rel=\"nofollow noopener\" target=\"_blank\">Granite 3.2<\/a>, for example. And while this seemed to be a short-lived phase, as the open model space is switching back to releasing reasoning and instruct models separately, it shows that IBM\u2019s LLM efforts are worth watching. 
With its fourth iteration, IBM adopts the mamba-attention architecture and also releases MoEs. Even more important: They are scaling up the model sizes! The writing style is also distinctly non-sloptimized, which is refreshing in this day and age.<\/p>\n<p><a href=\"https:\/\/open.substack.com\/pub\/robotic\/p\/latest-open-artifacts-12-qwen3-235b-a22b-instruct-2507?r=ozvld&amp;selection=7b6a371d-dafb-48c7-ae89-57b342f36bfe&amp;utm_campaign=post-share-selection&amp;utm_medium=web&amp;aspectRatio=instagram&amp;textColor=%23ffffff&amp;bgImage=true\" rel=\"nofollow noopener\" target=\"_blank\">SmolLM3<\/a>: A tiny yet capable model for its 3B size. All the data is open, as well as intermediate checkpoints. Aside from <a href=\"https:\/\/huggingface.co\/blog\/smollm3\" rel=\"nofollow noopener\" target=\"_blank\">the great initial blog<\/a>, the HF team has also released other <a href=\"https:\/\/huggingface.co\/spaces\/HuggingFaceTB\/smol-training-playbook\" rel=\"nofollow noopener\" target=\"_blank\">resources<\/a> which go deeper into the training. If you are in need of a great on-device model, chances are that SmolLM3 is a perfect fit!<\/p>\n<p>Edit: Added the tier list in text. 
Added Cohere, ServiceNow, Motif Technologies, and TNG Group to the tier list.<a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!ykeT!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6692cb58-3e23-47e1-a2e9-378f9a91d02c_1946x974.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img can-restack\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/6692cb58-3e23-47e1-a2e9-378f9a91d02c_1946.jpeg\" width=\"1456\" height=\"729\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/6692cb58-3e23-47e1-a2e9-378f9a91d02c_1946x974.png&quot;,&quot;srcNoWatermark&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/51281832-b837-40dd-b664-2975823a52e6_1946x974.png&quot;,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:729,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:823981,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/www.interconnects.ai\/i\/181259397?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51281832-b837-40dd-b664-2975823a52e6_1946x974.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>We have received more requests than imagined to update our <a href=\"https:\/\/www.interconnects.ai\/p\/chinas-top-19-open-model-labs\" rel=\"nofollow noopener\" target=\"_blank\">tier list<\/a>, which covered the Chinese ecosystem, and to extend it with Western orgs, which we\u2019ve covered <a 
href=\"https:\/\/www.interconnects.ai\/p\/latest-open-artifacts-16-whos-building\" rel=\"nofollow noopener\" target=\"_blank\">here<\/a>. We have added a specialist tier which contains the organizations that trained few models or are specializing in a certain niche, e.g. small, on-device models (Liquid, HuggingFace). <\/p>\n<p>The organizations are as follows.<\/p>\n<p>Frontier: DeepSeek, Qwen, Moonshot AI (Kimi)<\/p>\n<p>Close competitors: Zhipu (Z.Ai), Minimax<\/p>\n<p>Noteworthy: StepFun, InclusionAI \/ Ant Ling, Meituan Longcat, Tencent, IBM, NVIDIA, Google, Mistral<\/p>\n<p>Specialists: OpenAI, Ai2, Moondream, Arcee, RedNote, HuggingFace, LiquidAI, Microsoft, Xiaomi, Mohamed bin Zayed University of Artificial Intelligence<\/p>\n<p>On the rise: ByteDance Seed, Apertus, OpenBMB, Motif, Baidu, Marin Community, InternLM, OpenGVLab, ServiceNow, Skywork<\/p>\n<p>Honorable mentions: TNG Group, Meta, Cohere, Beijing Academy of Artificial Intelligence, Multimodal Art Projection, Huawei<\/p>\n<p>Some notes:<\/p>\n<p>A lot of the orgs in Noteworthy can reach a higher tier by scaling up their current recipe. This tier also includes model makers who train a lot of models, often for different modalities.<\/p>\n<p>Meituan Longcat (China\u2019s DoorDash equivalent) is a new addition to the tier list, their models are recurring guests in the artifacts series.<\/p>\n<p>Meta was weird to place, given that there are a <a href=\"https:\/\/www.cnbc.com\/2025\/12\/09\/meta-avocado-ai-strategy-issues.html\" rel=\"nofollow noopener\" target=\"_blank\">lot of reports<\/a> that they will release proprietary models in the future. The future of Llama is uncertain.<\/p>\n<p>ByteDance Seed\u2019s papers show that they are a strong research organization, which yet has to be reflected in their open model releases. 
<a href=\"https:\/\/huggingface.co\/ByteDance-Seed\/Seed-OSS-36B-Instruct\" rel=\"nofollow noopener\" target=\"_blank\">Seed-OSS 36B<\/a> is their first capable LLM, while their other releases, such as <a href=\"https:\/\/huggingface.co\/ByteDance-Seed\/AHN-Mamba2-for-Qwen-2.5-Instruct-3B\" rel=\"nofollow noopener\" target=\"_blank\">AHN-Mamba2-for-Qwen-2.5-Instruct-3B<\/a> are mostly research artifacts.<\/p>\n<p>If you want to take your own stab at the tier list, you can do so by using <a href=\"https:\/\/tiermaker.com\/create\/chinese-model-makers-18511783\" rel=\"nofollow noopener\" target=\"_blank\">this link<\/a>!<\/p>\n<p data-attrs=\"{&quot;url&quot;:&quot;https:\/\/www.interconnects.ai\/p\/2025-open-models-year-in-review\/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}\" data-component-name=\"ButtonCreateButton\" class=\"button-wrapper\"><a href=\"https:\/\/www.interconnects.ai\/p\/2025-open-models-year-in-review\/comments\" rel=\"nofollow noopener\" class=\"button primary\" target=\"_blank\">Leave a comment<\/a><\/p>\n<p>2025 was a seminal year in open models, where open model deployments became a real possibility. It is still well accepted that the best closed models have a robustness and richness that open models matching them on benchmarks don\u2019t always have, but the potential of trying open models has never been higher. This leaves us at the point where open models are established, so where do they go next? 
<\/p>\n<p>In 2026, we expect the major talking points of open models to follow:<\/p>\n","protected":false},"excerpt":{"rendered":"Welcome to the first Artifacts Recap, where we highlight the most notable and impactful open model releases of&hellip;\n","protected":false},"author":2,"featured_media":185131,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,85,46,125],"class_list":{"0":"post-185130","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-il","12":"tag-israel","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/185130","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=185130"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/185130\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/185131"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=185130"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=185130"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=185130"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}