AI-native computing is redefining how progress happens — compressing years of development into days and accelerating the birth of intelligent systems built to learn, adapt and evolve.

Behind this surge are AI factories: the modern power plants of computation. These purpose-built data centers aren’t just optimizing for performance — they’re rewriting the rules of scale, energy efficiency and architectural design to support an entirely new generation of self-improving, AI-native applications. As innovation cycles shrink, the question becomes not whether infrastructure can keep up, but how fast it can reinvent itself.

Vipul Prakash, co-founder and CEO of Together Computer, talks with theCUBE about how AI-native applications are driving explosive growth during theCUBE + NYSE Wired: AI Factories – Data Centers of the Future event.

Together Computer’s Vipul Prakash discusses the growth of AI-native applications in relation to AI factories.

“When you had SaaS applications, the ones that were growing really rapidly … they maybe doubled in nine months,” said Vipul Prakash (pictured), co-founder and chief executive officer of Together Computer Inc. “That was considered to be very fast growth. We are seeing that happen to AI-native applications in nine days. We have customers who are scaling up so rapidly. Their products are so rewarding and are getting distributed internationally. That is creating this … immense need for AI computation and efficient AI computation.”

Prakash spoke with theCUBE’s John Furrier at theCUBE + NYSE Wired: AI Factories – Data Centers of the Future event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the explosive growth of AI-native applications, the infrastructure required to support them and the emerging role of enterprises in building their own AI factories.

Defining AI-native applications and infrastructure

AI-native applications are those where the fundamental functionality of the application is driven by an AI model, according to Prakash. Unlike retrofitted apps that add AI features, these rely on models for their very existence. Examples include ChatGPT, Cursor and the video-generation platform Hedra.

“This is different from introducing some AI features in traditional applications because the AI is so central to these applications,” he said. “Their requirements for efficiency, scale and growth of the underlying AI infrastructure are extreme. They’re really driving the need for building these AI factories rapidly, which consume tokens, to learn from them and then produce them at high throughputs and low latencies.”

Supporting AI-native applications demands specialized infrastructure — AI factories — built for massive throughput and nonstop availability across compute, storage and networking. As enterprises scale their AI initiatives, many are turning to open-source models, fine-tuning them with proprietary data to create tailored, high-performing solutions that rival closed-source alternatives, Prakash explained.

“Once you have millions of users, you are collecting a lot of data and success criteria for the results that you’re producing,” he said. “And that becomes a really great set of data to fine-tune an open-source model on. What we are seeing is that applications are using closed-source APIs, but then they are segmenting their traffic into self-built versions or adapted versions of open-source models, which they’re deploying with Together at scale.”

One of the biggest infrastructure challenges is data movement. AI systems require data to sit close to computation for rapid access during training, fine-tuning and inference. Together AI addresses this by building large, fabric-connected storage systems adjacent to models, ensuring minimal latency, Prakash noted.

“We have embodied systems that are models for robots that are being created, which have a fairly large data set, both as a starting data set and the generative data set,” he said. “We’ll see AI factories being fitted out with vast amounts of storage in the coming years.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of theCUBE + NYSE Wired: AI Factories – Data Centers of the Future event:

Photo: SiliconANGLE

Support our mission to keep content open and free by engaging with theCUBE community. Join theCUBE’s Alumni Trust Network, where technology leaders connect, share intelligence and create opportunities.

15M+ viewers of theCUBE videos, powering conversations across AI, cloud, cybersecurity and more.
11.4k+ theCUBE alumni, connecting more than 11,400 tech and business leaders shaping the future through a unique trust-based network.

About SiliconANGLE Media

SiliconANGLE Media is a recognized leader in digital media innovation, uniting breakthrough technology, strategic insights and real-time audience engagement. As the parent company of SiliconANGLE, theCUBE Network, theCUBE Research, CUBE365, theCUBE AI and theCUBE SuperStudios — with flagship locations in Silicon Valley and the New York Stock Exchange — SiliconANGLE Media operates at the intersection of media, technology and AI.

Founded by tech visionaries John Furrier and Dave Vellante, SiliconANGLE Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.