{"id":173300,"date":"2025-12-03T18:28:09","date_gmt":"2025-12-03T18:28:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/173300\/"},"modified":"2025-12-03T18:28:09","modified_gmt":"2025-12-03T18:28:09","slug":"mixture-of-experts-powers-the-most-intelligent-frontier-models","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/173300\/","title":{"rendered":"Mixture of Experts Powers the Most Intelligent Frontier Models"},"content":{"rendered":"<p>The top 10 most intelligent open-source models all use a mixture-of-experts architecture.<br \/>\nKimi K2 Thinking, DeepSeek-R1, Mistral Large 3 and others run 10x faster on NVIDIA GB200 NVL72.<\/p>\n<p>A look under the hood of virtually any frontier model today will reveal a <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/mixture-of-experts\/\" rel=\"nofollow noopener\">mixture-of-experts<\/a> (MoE) model architecture that mimics the efficiency of the human brain.<\/p>\n<p>Just as the brain activates specific regions based on the task, MoE models divide work among specialized \u201cexperts,\u201d activating only the relevant ones for every <a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-tokens-explained\/\" rel=\"nofollow noopener\" target=\"_blank\">AI token<\/a>. This results in faster, more efficient token generation without a proportional increase in compute.<\/p>\n<p>The industry has already recognized this advantage. On the independent <a target=\"_blank\" href=\"https:\/\/artificialanalysis.ai\/models\/open-source\" rel=\"nofollow noopener\">Artificial Analysis (AA) leaderboard<\/a>, the top 10 most intelligent open-source models use an MoE architecture, including DeepSeek AI\u2019s DeepSeek-R1, Moonshot AI\u2019s Kimi K2 Thinking, OpenAI\u2019s gpt-oss-120B and Mistral AI\u2019s Mistral Large 3.<\/p>\n<p>However, scaling MoE models in production while delivering high performance is notoriously difficult. 
The extreme codesign of <a target="_blank" href="https://www.nvidia.com/en-us/data-center/gb200-nvl72/" rel="nofollow noopener">NVIDIA GB200 NVL72</a> systems combines hardware and software optimizations for maximum performance and efficiency, making it practical and straightforward to scale MoE models.</p>
<p>The Kimi K2 Thinking MoE model — ranked as the most intelligent open-source model on the AA leaderboard — sees a 10x performance leap on the NVIDIA GB200 NVL72 rack-scale system compared with NVIDIA HGX H200. Building on the performance delivered for the <a href="https://blogs.nvidia.com/blog/blackwell-inferencemax-benchmark-results/" rel="nofollow noopener" target="_blank">DeepSeek-R1</a> and Mistral Large 3 MoE models, this breakthrough underscores how MoE is becoming the architecture of choice for frontier models — and why NVIDIA’s full-stack inference platform is the key to unlocking its full potential.</p>
<p>What Is MoE, and Why Has It Become the Standard for Frontier Models?</p>
<p>Until recently, the industry standard for building smarter AI was simply building bigger, dense models that use all of their model parameters — often hundreds of billions for today’s most capable models — to generate every token. While powerful, this approach requires immense computing power and energy, making it challenging to scale.</p>
<p>Much like the human brain relies on specific regions to handle different cognitive tasks — whether processing language, recognizing objects or solving a math problem — MoE models comprise several specialized “experts.” For any given token, only the most relevant ones are activated by a router.
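</p>

<p>The routing idea can be sketched in a few lines of code. The following is a toy NumPy example with made-up sizes and random weights, not any production implementation: a small router scores every expert for a token, and only the top-k experts actually run.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: real frontier MoE models use far larger dimensions, hundreds
# of experts and learned weights. Everything here is illustrative.
d_model, n_experts, top_k = 16, 8, 2

router_w = rng.normal(size=(d_model, n_experts))            # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route one token to its top-k experts and mix their outputs."""
    logits = token @ router_w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                    # softmax over experts
    chosen = np.argsort(probs)[-top_k:]                     # ids of the top-k experts
    gates = probs[chosen] / probs[chosen].sum()             # renormalized gate weights
    # Only the chosen experts run; the other n_experts - top_k stay idle.
    return sum(g * (token @ experts[i]) for g, i in zip(gates, chosen))

out = moe_forward(rng.normal(size=d_model))
print(out.shape)  # (16,)
```

<p>At scale, the savings come from the weights of the unchosen experts never being multiplied at all.</p>

<p>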
This design means that even though the overall model may contain hundreds of billions of parameters, generating a token involves using only a small subset — often just tens of billions.</p>
<p><img loading="lazy" decoding="async" class="wp-image-87988 size-large" src="https://www.newsbeep.com/ie/wp-content/uploads/2025/12/mixture-of-experts-video-1680x840.png" alt="A diagram titled 'Mixture of Experts' illustrating AI architecture. A stylized brain network sits between an 'Input' data icon and an 'Output' lightbulb icon. Inside the brain, specific nodes are highlighted with lightning bolt symbols, visually demonstrating how only relevant 'experts' are activated to generate every token rather than the entire network." width="1680" height="840" />Like the human brain uses specific regions for different tasks, <a target="_blank" href="https://www.youtube.com/shorts/TlmSpAvYwYI" rel="nofollow noopener">mixture-of-experts models</a> use a router to select only the most relevant experts to generate every token.</p>
<p>By selectively engaging only the experts that matter most, MoE models achieve higher intelligence and adaptability without a matching rise in computational cost. This makes them the foundation for efficient AI systems optimized for performance per dollar and per watt — generating significantly more intelligence for every unit of energy and capital invested.</p>
<p>Given these advantages, it is no surprise that MoE has rapidly become the architecture of choice for frontier models, adopted by over 60% of open-source AI model releases this year.
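</p>

<p>The per-token compute savings behind this shift can be seen with rough arithmetic, using the common approximation of about 2 FLOPs per active parameter per generated token. The parameter counts below are illustrative assumptions, not figures for any specific model:</p>

```python
# Rough per-token compute comparison between a dense model and an MoE model
# of the same total size. The "~2 FLOPs per parameter per token" rule of
# thumb and both parameter counts are illustrative assumptions.
FLOPS_PER_PARAM = 2

total_params = 600e9   # hypothetical ~600B-parameter frontier model
active_params = 40e9   # hypothetical ~40B parameters activated per token

dense_flops = FLOPS_PER_PARAM * total_params  # dense: every weight participates
moe_flops = FLOPS_PER_PARAM * active_params   # MoE: only routed experts run

ratio = dense_flops / moe_flops
print(f"dense: {dense_flops:.1e} FLOPs/token, MoE: {moe_flops:.1e} FLOPs/token")
print(f"~{ratio:.0f}x less compute per token for the MoE model")
```

<p>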
Since early 2023, it’s enabled a nearly 70x increase in model intelligence — pushing the limits of AI capability.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-88030 size-medium" src="https://www.newsbeep.com/ie/wp-content/uploads/2025/12/MoETrendVisual-e1764777501331-960x714.png" alt="" width="960" height="714" /></p>
<p>“Our pioneering work with OSS mixture-of-experts architecture, starting with Mixtral 8x7B two years ago, ensures advanced intelligence is both accessible and sustainable for a broad range of applications,” said Guillaume Lample, cofounder and chief scientist at Mistral AI. “Mistral Large 3’s MoE architecture enables us to scale AI systems to greater performance and efficiency while dramatically lowering energy and compute demands.”</p>
<p>Overcoming MoE Scaling Bottlenecks With Extreme Codesign</p>
<p>Frontier MoE models are simply too large and complex to be deployed on a single GPU. To run them, experts must be distributed across multiple GPUs, a technique called expert parallelism. Even on powerful platforms such as the NVIDIA H200, deploying MoE models involves bottlenecks such as:</p>
<ul>
<li>Memory limitations: For each token, GPUs must dynamically load the selected experts’ parameters from high-bandwidth memory, placing frequent, heavy pressure on memory bandwidth.</li>
<li>Latency: Experts must execute a near-instantaneous all-to-all communication pattern to exchange information and form a final, complete answer. However, on H200, spreading experts across more than eight GPUs requires them to communicate over higher-latency scale-out networking, limiting the benefits of expert parallelism.</li>
</ul>
<p>The solution: extreme codesign.</p>
<p>NVIDIA GB200 NVL72 is a rack-scale system with 72 NVIDIA Blackwell GPUs working together as if they were one, delivering 1.4 exaflops of AI performance and 30 TB of fast shared memory.
The 72 GPUs are connected by NVLink Switch into a single, massive NVLink interconnect fabric, allowing every GPU to communicate with every other GPU over a combined 130 TB/s of NVLink bandwidth.</p>
<p>MoE models can tap into this design to scale expert parallelism far beyond previous limits — distributing the experts across a much larger set of up to 72 GPUs.</p>
<p>This architectural approach directly resolves MoE scaling bottlenecks by:</p>
<ul>
<li>Reducing the number of experts per GPU: Distributing experts across up to 72 GPUs reduces the number of experts per GPU, minimizing parameter-loading pressure on each GPU’s high-bandwidth memory. Fewer experts per GPU also frees up memory space, allowing each GPU to serve more concurrent users and support longer input lengths.</li>
<li>Accelerating expert communication: Experts spread across GPUs can communicate with each other at very low latency over NVLink. The NVLink Switch also has the compute power needed to perform some of the calculations required to combine information from various experts, speeding up delivery of the final answer.</li>
</ul>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-87985 size-large" src="https://www.newsbeep.com/ie/wp-content/uploads/2025/12/extreme-codesign-moe-1680x945.png" alt="" width="1680" height="945" /></p>
<p>Other full-stack optimizations also play a key role in unlocking high inference performance for MoE models. The <a target="_blank" href="https://developer.nvidia.com/dynamo" rel="nofollow noopener">NVIDIA Dynamo</a> framework orchestrates <a target="_blank" href="https://www.nvidia.com/en-us/glossary/disaggregated-serving/" rel="nofollow noopener">disaggregated serving</a> by assigning prefill and decode tasks to different GPUs, allowing decode to run with large expert parallelism, while prefill uses parallelism techniques better suited to its workload.
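</p>

<p>The memory-side benefit of wider expert parallelism described above can be sketched with simple arithmetic. The expert count and per-expert footprint here are illustrative assumptions, not figures for any specific model:</p>

```python
# Spreading experts over more GPUs shrinks each GPU's share of expert
# weights, easing memory-capacity and parameter-loading pressure. The
# expert count and per-expert footprint below are illustrative assumptions.
n_experts = 256           # hypothetical total routed experts in the model
bytes_per_expert = 2.5e9  # hypothetical weight footprint of one expert

def per_gpu_load(ep_degree: int) -> tuple[float, float]:
    """Return (experts per GPU, GB of expert weights per GPU)."""
    experts_per_gpu = n_experts / ep_degree
    return experts_per_gpu, experts_per_gpu * bytes_per_expert / 1e9

for ep in (8, 72):  # eight-GPU HGX-style node vs. 72-GPU NVLink domain
    e, gb = per_gpu_load(ep)
    print(f"EP={ep:2d}: {e:5.1f} experts/GPU, {gb:5.1f} GB of expert weights/GPU")
```

<p>The memory each GPU no longer spends on expert weights can instead hold key-value cache, which is what allows more concurrent users and longer inputs.</p>

<p>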
The <a target="_blank" href="https://developer.nvidia.com/blog/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference/" rel="nofollow noopener">NVFP4</a> format helps maintain accuracy while further boosting performance and efficiency.</p>
<p>Open-source inference frameworks such as NVIDIA TensorRT-LLM, SGLang and vLLM support these optimizations for MoE models. SGLang, in particular, has played a significant role in <a target="_blank" href="https://lmsys.org/blog/2025-09-25-gb200-part-2/" rel="nofollow noopener">advancing large-scale MoE on GB200 NVL72</a>, helping validate and mature many of the techniques used today.</p>
<p>To bring this performance to enterprises worldwide, the GB200 NVL72 is being deployed by major cloud service providers and <a target="_blank" href="https://www.nvidia.com/en-us/data-center/gpu-cloud-computing/partners/" rel="nofollow noopener">NVIDIA Cloud Partners</a> including Amazon Web Services, Core42, CoreWeave, Crusoe, Google Cloud, Lambda, Microsoft Azure, Nebius, Nscale, Oracle Cloud Infrastructure, Together AI and others.</p>
<p>“At CoreWeave, our customers are leveraging our platform to put mixture-of-experts models into production as they build agentic workflows,” said Peter Salanki, cofounder and chief technology officer at CoreWeave. “By working closely with NVIDIA, we are able to deliver a tightly integrated platform that brings MoE performance, scalability and reliability together in one place.
You can only do that on a cloud purpose-built for AI.”</p>
<p>Customers such as DeepL are using the Blackwell NVL72 rack-scale design to build and deploy their next-generation AI models.</p>
<p>“DeepL is leveraging NVIDIA GB200 hardware to train mixture-of-experts models, advancing its model architecture to improve efficiency during training and inference, setting new benchmarks for performance in AI,” said Paul Busch, research team lead at DeepL.</p>
<p>The Proof Is in the Performance Per Watt</p>
<p>NVIDIA GB200 NVL72 efficiently scales complex MoE models and delivers a 10x leap in performance per watt. This performance leap isn’t just a benchmark; it enables 10x the token revenue, transforming the economics of AI at scale in power- and cost-constrained data centers.</p>
<p><img loading="lazy" decoding="async" class="wp-image-88039 size-large" src="https://www.newsbeep.com/ie/wp-content/uploads/2025/12/DSR1-10X-MOE-BLOG-FINAL-1680x900.jpg" alt="A bubble chart titled ‘Today’s Leading Frontier Models are Built on MoE’ plots model releases from January 2023 to today on the x-axis and model intelligence on the y-axis. Each model appears as a bubble sized by parameter count, with green representing mixture of experts (MoE) and gray representing dense architectures. Early years show mostly small, low-intelligence dense models, but a dashed vertical line labeled ‘Start of MoE Era’ in early 2025 marks a shift: The right side of the chart is dominated by large green MoE bubbles such as Qwen 3 325B, Kimi-K2, Hermes 4 405B and Llama 4 Maverick, clustered higher on the intelligence axis. A legend distinguishes MoE from dense models, and a scale key illustrates bubble sizes ranging from 8 billion to 1 trillion parameters.
The chart conveys that MoE models now lead frontier AI development." width="1680" height="900" />Since early 2025, nearly all leading frontier models use MoE designs.</p>
<p>At NVIDIA GTC Washington, D.C., NVIDIA founder and CEO Jensen Huang highlighted how GB200 NVL72 delivers 10x the performance of NVIDIA Hopper for DeepSeek-R1, and this performance extends to other DeepSeek variants as well.</p>
<p>“With GB200 NVL72 and Together AI’s custom optimizations, we are exceeding customer expectations for large-scale inference workloads for MoE models like DeepSeek-V3,” said Vipul Ved Prakash, cofounder and CEO of Together AI. “The performance gains come from NVIDIA’s full-stack optimizations coupled with Together AI Inference breakthroughs across kernels, runtime engine and speculative decoding.”</p>
<p>This performance advantage is evident across other frontier models.</p>
<p>Kimi K2 Thinking, the most intelligent open-source model, serves as another proof point, achieving a 10x generational performance gain when deployed on GB200 NVL72.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-88036 size-large" src="https://www.newsbeep.com/ie/wp-content/uploads/2025/12/KIMI-K2-10X-MOE-BLOG-FINAL-1680x895.jpg" alt="" width="1680" height="895" /></p>
<p>Fireworks AI currently deploys Kimi K2 on the NVIDIA B200 platform, achieving the <a target="_blank" href="https://artificialanalysis.ai/models/kimi-k2-thinking/providers" rel="nofollow noopener">highest performance on the Artificial Analysis leaderboard</a>.</p>
<p>“NVIDIA GB200 NVL72 rack-scale design makes MoE model serving dramatically more efficient,” said Lin Qiao, cofounder and CEO of Fireworks AI.
“Looking ahead, NVL72 has the potential to transform how we serve massive MoE models, delivering major performance improvements over the Hopper platform and setting a new bar for frontier model speed and efficiency.”</p>
<p>Mistral Large 3 also achieved a 10x performance gain on the GB200 NVL72 compared with the prior-generation H200. This generational gain translates into better user experience, lower per-token cost and higher energy efficiency for this new MoE model.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-88033 size-large" src="https://www.newsbeep.com/ie/wp-content/uploads/2025/12/ML3-10X-MOE-BLOG-FINAL-1680x895.jpg" alt="" width="1680" height="895" /></p>
<p>Powering Intelligence at Scale</p>
<p>The NVIDIA GB200 NVL72 rack-scale system is designed to deliver strong performance beyond MoE models.</p>
<p>The reason becomes clear when looking at where AI is heading: the newest generation of multimodal AI models has specialized components for language, vision, audio and other modalities, activating only the ones relevant to the task at hand.</p>
<p>In agentic systems, different “agents” specialize in planning, perception, reasoning, tool use or search, and an orchestrator coordinates them to deliver a single outcome. In both cases, the core pattern mirrors MoE: route each part of the problem to the most relevant experts, then coordinate their outputs to produce the final outcome.</p>
<p>Extending this principle to production environments where multiple applications and agents serve multiple users unlocks new levels of efficiency. Instead of duplicating massive AI models for every agent or application, this approach can enable a shared pool of experts accessible to all, with each request routed to the right expert.</p>
<p>Mixture of experts is a powerful architecture moving the industry toward a future where massive capability, efficiency and scale coexist.
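</p>

<p>That shared-pool pattern, one set of specialists serving many callers, can be sketched as a simple dispatcher. The specialist names and keyword scoring below are hypothetical, chosen only to illustrate the routing idea:</p>

```python
# A toy dispatcher mirroring the MoE pattern at the application level:
# many requests share one pool of specialists, and each request is routed
# to the best match. Specialist names and keyword scoring are hypothetical.
SPECIALISTS = {
    "planning":  {"plan", "schedule", "steps"},
    "retrieval": {"search", "find", "lookup"},
    "reasoning": {"why", "prove", "explain"},
}

def route(request: str) -> str:
    """Pick the specialist whose keyword set best overlaps the request."""
    words = set(request.lower().split())
    return max(SPECIALISTS, key=lambda name: len(SPECIALISTS[name] & words))

print(route("plan the steps for the launch"))     # planning
print(route("search the docs and find the API"))  # retrieval
```

<p>A production router would score semantically rather than by keywords, but the economics are the same: one shared pool of specialists instead of a full copy of every capability per application.</p>

<p>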
The GB200 NVL72 unlocks this potential today, and NVIDIA’s roadmap with the NVIDIA Vera Rubin architecture will continue to expand the horizons of frontier models.</p>
<p>Learn more about how GB200 NVL72 scales complex MoE models in this <a target="_blank" href="https://developer.nvidia.com/blog/scaling-large-moe-models-with-wide-expert-parallelism-on-nvl72-rack-scale-systems/" rel="nofollow noopener">technical deep dive</a>.</p>
<p>This post is part of <a href="https://blogs.nvidia.com/blog/tag/think-smart" rel="nofollow noopener" target="_blank">Think SMART</a>, a series focused on how leading AI service providers, developers and enterprises can boost their <a target="_blank" href="https://developer.nvidia.com/deep-learning-performance-training-inference/ai-inference" rel="nofollow noopener">inference performance</a> and return on investment with the latest advancements from NVIDIA’s full-stack <a target="_blank" href="https://www.nvidia.com/en-us/solutions/ai/inference/" rel="nofollow noopener">inference platform</a>.</p>