AI computing has reached a breaking point. As models balloon in size and complexity, memory capacity has become the true bottleneck. Majestic Labs, a startup founded by former Google and Meta engineers, has emerged to tackle this challenge head-on, launching with over $100 million in funding to reinvent how artificial intelligence systems are built and scaled.
The funding comes from a top-tier investor group led by Bow Wave Capital (Series A) and Lux Capital (seed), with participation from SBI, Upfront, Grove Ventures, Hetz Ventures, QP Ventures, Aidenlair Global, and TAL Ventures.
The funding will fuel hiring across AI data science, silicon design, and systems engineering, as well as pilot deployments to prove the platform’s performance edge. With this launch, Majestic Labs isn’t just building servers; it is laying the foundation for AI at scale.
The growing imbalance between compute and memory
Over the last decade, the scale of AI workloads has skyrocketed. The Stanford 2025 AI Index Report notes that the compute used to train AI models doubles roughly every five months, while training datasets double roughly every eight. Yet memory infrastructure, the silent backbone of machine learning, has lagged behind.
Today, organizations routinely overprovision GPUs just to meet memory demands, wasting power, floor space, and budget. These systems, designed for compute-heavy but memory-light tasks, choke on today’s data-intensive workloads. The result is inefficiency, high costs, and a widening gap between what AI models need and what infrastructure delivers.
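The overprovisioning problem comes down to simple arithmetic: when GPU count is dictated by memory capacity rather than compute demand, every extra card is mostly idle silicon. A minimal sketch, using an assumed ~80 GB of memory per top-tier GPU (the figures are illustrative, not Majestic's):

```python
import math

def gpus_needed(working_set_gb: float, gpu_mem_gb: float = 80.0) -> int:
    """GPUs required purely to hold a workload's memory footprint.
    Assumes ~80 GB per top-tier GPU; real deployments also reserve
    headroom for activations and framework overhead."""
    return math.ceil(working_set_gb / gpu_mem_gb)

# A hypothetical 2 TB working set (weights plus KV cache) forces
# 25 GPUs into the cluster even if the compute demand would be
# satisfied by a handful of them.
print(gpus_needed(2000))  # -> 25
```

Under these assumed numbers, the memory footprint alone fixes the GPU count, which is the inefficiency the article describes.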
Breaking the “memory wall” with a new server architecture
Behind Majestic Labs is a powerhouse team: Ofer Shacham, Masumi Reynders, and Sha Rabii, the engineers who built Meta’s FAST (Facebook Agile Silicon Team) and Google’s GChips. Between them, they hold over 120 patents and have shipped hundreds of millions of custom silicon units, including the first AI processors for mobile devices.
Majestic’s core breakthrough is an AI server that delivers 1000x the memory capacity of a top-tier GPU, consolidating what would typically require ten or more racks of servers into a single system.
Its reimagined architecture, spanning both hardware and software, eliminates the long-standing “memory wall,” a performance drag caused by slow data transfer between processors and memory. This design enables over 50x performance gains while using less power and space.
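The memory wall can be made concrete with a roofline-style back-of-the-envelope calculation: a step's time is bounded by whichever resource saturates first, compute or memory traffic. This is a generic sketch with assumed, illustrative hardware figures, not Majestic's design:

```python
def step_time_s(flops: float, bytes_moved: float,
                peak_flops: float = 1e15, peak_bw_bytes: float = 3e12) -> float:
    """Roofline-style lower bound on step time.
    Peak figures are illustrative (~1 PFLOP/s compute, ~3 TB/s
    memory bandwidth), not any specific product's numbers."""
    return max(flops / peak_flops, bytes_moved / peak_bw_bytes)

# Hypothetical LLM decode step: a ~35B-parameter model in 16-bit
# weights streams ~70 GB per generated token and does roughly
# 2 FLOPs per parameter.
t = step_time_s(flops=70e9, bytes_moved=70e9)
# Memory traffic (70e9 / 3e12 s) dwarfs compute (70e9 / 1e15 s),
# so the step is memory-bound: faster memory, not more FLOPs,
# is what shortens it.
```

In this sketch the memory term is hundreds of times larger than the compute term, which is why architectures that rebalance memory against compute can claim large gains on such workloads.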
Majestic servers also run natively with popular AI frameworks, letting organizations tap into massive memory pools without complex reconfiguration. The systems effortlessly handle emerging workloads such as large language models with huge context windows, agentic AI, mixture-of-experts, and graph neural networks, all of which require fast, memory-rich environments.
By rebalancing compute and memory, Majestic enables efficiency at every level, reducing data center sprawl, cutting cooling costs, and allowing AI models to run faster and smarter.
“Majestic is built on a simple and powerful insight: AI’s next leap forward will come from access to more powerful AI infrastructure, and more powerful AI infrastructure requires a reimagination of the memory system,” said Co-Founder and CEO Ofer Shacham. “Majestic servers will have all the compute of state-of-the-art GPU/TPU-based systems coupled with 1000x the memory. Our breakthrough technology packs the memory capacity and bandwidth of 10 racks of today’s most advanced servers into a single server, providing our customers with unprecedented gains in performance and efficiency while slashing power consumption.”
“Majestic allows for a level of scalability and operational efficiency that simply isn’t possible with traditional GPU-based systems,” said Co-Founder and President Sha Rabii. “Our systems support vastly more users per server and shorten training time, lifting AI workloads to new heights both on-premises and in the cloud. Our customers benefit from tremendous improvements in performance, power consumption and total cost of ownership.”
“Majestic has engineered a new system for AI from the ground up, encompassing silicon, IO, packaging, and software, that is specifically tailored for the most advanced AI workloads,” said Shahin Farshchi, PhD, Partner at Lux Capital. “The team is making the most powerful AI accessible at an unprecedented scale, providing an opportunity to truly reshape how AI is delivered globally.”
“This is an extraordinary team that has identified today’s most critical constraint in AI,” said Itai Lemberger, Founder of Bow Wave Capital. “Majestic has a clear understanding of customer pain points and is focused on addressing the critical challenges in scaling AI inferencing: performance, power and efficiency. Majestic is the best solution we’ve seen for addressing the memory wall problem facing all AI models. Their architecture literally replaces multiple racks with a single server.”
“AI infrastructure is scaling at unprecedented speed, but the industry has not solved key fundamental architectural inefficiencies,” said Co-Founder and COO Masumi Reynders. “Majestic addresses this by delivering immediate operational gains on today’s workloads while maintaining full programmability and flexibility to adapt as AI evolves beyond transformer-based models.”