{"id":470300,"date":"2026-02-15T15:16:28","date_gmt":"2026-02-15T15:16:28","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/470300\/"},"modified":"2026-02-15T15:16:28","modified_gmt":"2026-02-15T15:16:28","slug":"leading-inference-providers-cut-ai-costs-by-up-to-10x-with-open-source-models-on-nvidia-blackwell","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/470300\/","title":{"rendered":"Leading Inference Providers Cut AI Costs by up to 10x With Open Source Models on NVIDIA Blackwell"},"content":{"rendered":"<p>A diagnostic insight in healthcare. A character\u2019s dialogue in an interactive game. An autonomous resolution from a customer service agent. Each of these AI-powered interactions is built on the same unit of intelligence: a <a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-tokens-explained\/\" rel=\"nofollow noopener\" target=\"_blank\">token<\/a>.<\/p>\n<p>Scaling these AI interactions requires businesses to consider whether they can afford more tokens. The answer lies in better tokenomics \u2014 which at its core is about driving down the cost of each token. This downward trend is unfolding across industries. Recent <a target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2511.23455\" rel=\"nofollow noopener\">MIT research<\/a> found that infrastructure and algorithmic efficiencies are reducing inference costs for frontier-level performance by up to 10x annually.<\/p>\n<p>To understand how infrastructure efficiency improves tokenomics, consider the analogy of a high-speed printing press. If the press produces 10x output with incremental investment in ink, energy and the machine itself, the cost to print each individual page drops. 
In the same way, investments in AI infrastructure can lead to far greater token output compared with the increase in cost \u2014 causing a meaningful reduction in the cost per token.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-89810 size-medium\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2026\/02\/inference-moe-tokenomics-diagram_dgm2-r3-1280x680-1-960x510.png\" alt=\"\" width=\"960\" height=\"510\"  \/>When token output outpaces infrastructure cost, the cost of each token drops.<\/p>\n<p>That\u2019s why leading <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/ai-inference\/\" rel=\"nofollow noopener\">inference<\/a> providers including Baseten, DeepInfra, Fireworks AI and Together AI are using the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/technologies\/blackwell-architecture\/\" rel=\"nofollow noopener\">NVIDIA Blackwell platform<\/a>, which helps them reduce cost per token by up to 10x compared with the NVIDIA Hopper platform.<\/p>\n<p>These providers host advanced open source models, which have now reached frontier-level intelligence. By combining open source frontier intelligence, the extreme hardware-software codesign of NVIDIA Blackwell and their own optimized inference stacks, these providers are enabling dramatic token cost reductions for businesses across every industry.<\/p>\n<p>Healthcare \u2014 Baseten and Sully.ai Cut AI Inference Costs by 10x<\/p>\n<p>In healthcare, tedious, time-consuming tasks like medical coding, documentation and managing insurance forms cut into the time doctors can spend with patients.<\/p>\n<p><a target=\"_blank\" href=\"http:\/\/sully.ai\" rel=\"nofollow noopener\">Sully.ai<\/a> helps solve this problem by developing \u201cAI employees\u201d that can handle routine tasks like medical coding and note-taking. 
As the company\u2019s platform scaled, its proprietary, closed source models created three bottlenecks: unpredictable latency in real-time clinical workflows, inference costs that scaled faster than revenue and insufficient control over model quality and updates.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-89822\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2026\/02\/sullai-baseten-960x510.png\" alt=\"\" width=\"960\" height=\"510\"  \/>Sully.ai builds AI employees that handle routine tasks for physicians.<\/p>\n<p>To overcome these bottlenecks, <a target=\"_blank\" href=\"https:\/\/www.baseten.co\/resources\/customers\/sully-ai-returns-30m-clinical-minutes-using-open-source\/\" rel=\"nofollow noopener\">Sully.ai uses Baseten\u2019s Model API<\/a>, which deploys open source models such as gpt-oss-120b on NVIDIA Blackwell GPUs. Baseten used the low-precision <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference\/\" rel=\"nofollow noopener\">NVFP4 <\/a>data format, the NVIDIA TensorRT-LLM library and the <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/dynamo\" rel=\"nofollow noopener\">NVIDIA Dynamo<\/a> inference framework to deliver optimized inference. The company chose NVIDIA Blackwell to run its Model API after seeing up to 2.5x better throughput per dollar compared with the NVIDIA Hopper platform.<\/p>\n<p>As a result, <a target=\"_blank\" href=\"https:\/\/sully.ai\" rel=\"nofollow noopener\">Sully.ai<\/a>\u2019s inference costs dropped by 90%, representing a 10x reduction compared with the prior closed source implementation, while response times improved by 65% for critical workflows like generating medical notes. 
The company has now returned over 30 million minutes to physicians, time previously lost to data entry and other manual tasks.<\/p>\n<p>Gaming \u2014 DeepInfra and Latitude Reduce Cost per Token by 4x<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/latitude.io\/\" rel=\"nofollow noopener\">Latitude<\/a> is building the future of AI-native gaming with its <a target=\"_blank\" href=\"https:\/\/aidungeon.com\/\" rel=\"nofollow noopener\">AI Dungeon<\/a> adventure-story game and its upcoming AI-powered role-playing game platform, Voyage, where players can create or play worlds with the freedom to choose any action and make their own story.<\/p>\n<p>The company\u2019s platform uses large language models to respond to players\u2019 actions \u2014 but this comes with scaling challenges, as every player action triggers an inference request. Costs scale with engagement, and response times must stay fast enough to keep the experience seamless.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-89819\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2026\/02\/latitude-deepinfra-960x510.png\" alt=\"\" width=\"960\" height=\"510\"  \/>Latitude has built a text-based adventure-story game called \u201cAI Dungeon,\u201d which generates both narrative text and imagery in real time as players explore dynamic stories.<\/p>\n<p>Latitude runs large open source models on <a target=\"_blank\" href=\"https:\/\/deepinfra.com\/blog\/nvidia-blackwell-efficient-ai-inference\" rel=\"nofollow noopener\">DeepInfra\u2019s inference platform, powered by NVIDIA Blackwell GPUs and TensorRT-LLM<\/a>. For a large-scale <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/mixture-of-experts\/\" rel=\"nofollow noopener\">mixture-of-experts<\/a> (MoE) model, DeepInfra reduced the cost per million tokens from 20 cents on the NVIDIA Hopper platform to 10 cents on Blackwell. 
Moving to Blackwell\u2019s native low-precision NVFP4 format further cut that cost to just 5 cents \u2014 for a total 4x improvement in cost per token \u2014 while maintaining the accuracy that customers expect.<\/p>\n<p>Running these large-scale MoE models on DeepInfra\u2019s Blackwell-powered platform allows Latitude to deliver fast, reliable responses cost-effectively. DeepInfra\u2019s inference platform delivers this performance while reliably handling traffic spikes, letting Latitude deploy more capable models without compromising player experience.<\/p>\n<p>Agentic Chat \u2014 Fireworks AI and Sentient Foundation Lower AI Costs by up to 50%<\/p>\n<p>Sentient Labs is focused on bringing AI developers together to build powerful reasoning AI systems that are all open source. The goal is to accelerate AI toward solving harder reasoning problems through research in secure autonomy, agentic architecture and continual learning.<\/p>\n<p>Its first app, Sentient Chat, orchestrates complex multi-agent workflows and integrates more than a dozen specialized AI agents from the community. This gives Sentient Chat massive compute demands: a single user query can trigger a cascade of autonomous interactions that would typically incur costly infrastructure overhead.<\/p>\n<p>To manage this scale and complexity, <a target=\"_blank\" href=\"https:\/\/fireworks.ai\/blog\/Story-Sentient\" rel=\"nofollow noopener\">Sentient uses Fireworks AI\u2019s inference platform running on NVIDIA Blackwell<\/a>. 
With Fireworks\u2019 Blackwell-optimized inference stack, Sentient achieved 25-50% better cost efficiency compared with its previous Hopper-based deployment.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-89816 size-medium\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2026\/02\/sentient-fireworksai-960x510.png\" alt=\"\" width=\"960\" height=\"510\"  \/>Sentient Chat orchestrates complex multi-agent workflows and integrates more than a dozen specialized AI agents from the community.<\/p>\n<p>This higher throughput per GPU allowed the company to serve significantly more concurrent users for the same cost. The platform\u2019s scalability supported a viral launch of 1.8 million waitlisted users in 24 hours and processed 5.6 million queries in a single week while delivering consistent low latency.<\/p>\n<p>Customer Service \u2014 Together AI and Decagon Drive Down Cost by 6x<\/p>\n<p>Customer service calls with voice AI often end in frustration because even a slight delay can lead users to talk over the agent, hang up or lose trust.<\/p>\n<p>Decagon builds AI agents for enterprise customer support, with AI-powered voice being its most demanding channel. Decagon needed infrastructure that could deliver sub-second responses under unpredictable traffic loads with tokenomics that supported 24\/7 voice deployments.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-89813\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2026\/02\/decagon-togetherai-960x510.png\" alt=\"\" width=\"960\" height=\"510\"  \/>Decagon builds AI agents for customer support, and voice is its most demanding channel.<\/p>\n<p>Together AI runs <a target=\"_blank\" href=\"https:\/\/www.together.ai\/customers\/decagon\" rel=\"nofollow noopener\">production inference for Decagon\u2019s multimodel voice stack<\/a> on NVIDIA Blackwell GPUs. 
The companies collaborated on several key optimizations: speculative decoding, in which a smaller draft model quickly proposes tokens that the larger model then verifies for accuracy; caching of repeated conversation elements to speed up responses; and automatic scaling that handles traffic surges without degrading performance.<\/p>\n<p>Decagon saw response times under 400 milliseconds even when processing thousands of tokens per query. Cost per query, which is the total cost to complete one voice interaction, dropped by 6x compared with using closed source proprietary models. This was achieved through the combination of Decagon\u2019s multimodel approach (some models open source, some trained in-house on NVIDIA GPUs), NVIDIA Blackwell\u2019s extreme codesign and Together AI\u2019s optimized inference stack.<\/p>\n<p>Optimizing Tokenomics With Extreme Codesign<\/p>\n<p>The dramatic cost savings seen across healthcare, gaming and customer service are driven by the efficiency of NVIDIA Blackwell. 
The NVIDIA GB200 NVL72 system further scales this impact by delivering a breakthrough <a href=\"https:\/\/blogs.nvidia.com\/blog\/mixture-of-experts-frontier-models\/\" rel=\"nofollow noopener\" target=\"_blank\">10x reduction in cost per token<\/a> for reasoning MoE models compared with NVIDIA Hopper.<\/p>\n<p>NVIDIA\u2019s extreme codesign across every layer of the stack \u2014 spanning compute, networking and software \u2014 and its partner ecosystem are unlocking massive reductions in cost per token at scale.<\/p>\n<p>This momentum continues with the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/technologies\/rubin\/\" rel=\"nofollow noopener\">NVIDIA Rubin platform<\/a> \u2014 integrating six new chips into a single AI supercomputer to deliver 10x the performance and 10x lower token cost over Blackwell.<\/p>\n<p>Explore <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/solutions\/ai\/inference\/\" rel=\"nofollow noopener\">NVIDIA\u2019s full-stack inference platform<\/a> to learn more about how it delivers better tokenomics for AI inference.<\/p>\n","protected":false},"excerpt":{"rendered":"A diagnostic insight in healthcare. A character\u2019s dialogue in an interactive game. 
An autonomous resolution from a customer&hellip;\n","protected":false},"author":2,"featured_media":470301,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[2293,182,181,507,218403,102155,35800,10370,74,218404,218405],"class_list":{"0":"post-470300","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-agentic-ai","9":"tag-ai","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-dynamo","13":"tag-inference","14":"tag-nvidia-blackwell","15":"tag-open-source","16":"tag-technology","17":"tag-tensorrt","18":"tag-think-smart"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/470300","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=470300"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/470300\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/470301"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=470300"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=470300"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=470300"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}