
Nvidia (NVDA) holds an estimated 92% of the data center GPU market. Advanced Micro Devices (AMD) launched its AI chip efforts just two years ago.

AMD’s Helios platform, with 72 MI455X chips delivering 3 AI exaflops per rack, matches Nvidia’s NVL72 system.

AMD’s MI500 series targets a 1,000x performance gain over the MI300X by 2027, using a 2nm process and HBM4E memory.


Nvidia (NASDAQ:NVDA) has maintained a long head start in artificial intelligence (AI) chips, solidifying its position as the primary driver of the AI revolution. The company’s early focus on adapting graphics processing units for AI workloads allowed it to capture the majority of the data center accelerator market and become the face of the AI boom.

In contrast, Advanced Micro Devices (NASDAQ:AMD) entered the AI chip space much later, effectively starting from a minimal base around two years ago with the launch of its Instinct MI300X in late 2023. Yet recent developments show AMD has quickly narrowed the gap, achieving competitive performance in certain AI applications and positioning itself to potentially overtake Nvidia in key areas.

Nvidia continues to hold the top position in AI chips, commanding a substantial market share that dwarfs competitors. Estimates place Nvidia’s share of the data center GPU market at around 92%, a level it is likely to maintain for years due to its established ecosystem and customer lock-in through proprietary software like CUDA. This dominance stems from consistent innovation and high demand for its products, such as the Blackwell chips, which have seen strong uptake in training large AI models.

However, AMD’s recent moves suggest the possibility of a shift. While Nvidia remains the benchmark, AMD’s advancements could challenge this status by offering alternatives that appeal to cost-conscious buyers and those seeking open standards.

At CES 2026, Nvidia announced its next-generation Rubin platform, including the Vera Rubin chip, marking a significant step beyond the Blackwell architecture. The Rubin platform features six new chips, such as the Vera CPU and Rubin GPU, designed to form an AI supercomputer. It promises up to a 10x reduction in inference token costs and four times fewer GPUs for training mixture-of-experts models compared to Blackwell. The platform is now in full production, with availability expected in the second half of 2026. This builds on the high demand for Blackwell, which has powered major AI deployments.


AMD countered at the same event by revealing its Helios rack-scale platform, equipped with Instinct MI455X accelerators, EPYC Venice CPUs, and Pensando Vulcan NICs. Helios delivers up to 3 AI exaflops per rack — that’s three quintillion, or three billion billion, floating-point operations per second — targeting trillion-parameter model training with high bandwidth and energy efficiency. It matches Nvidia’s NVL72 system, which uses 72 Rubin GPUs, by also incorporating 72 MI455X chips.

AMD is positioning Helios as a more memory-rich and cost-effective option within an open ROCm software ecosystem, contrasting with Nvidia’s focus on maximum raw training compute in its proprietary setup. It calls Helios the “blueprint for yotta-scale compute” (the yotta- prefix denotes one septillion, or 1 followed by 24 zeros) and says global compute capacity is projected to grow to 10 yottaflops or more over the next five years.
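To put these scale claims in perspective, here is a quick back-of-envelope calculation derived from the figures quoted above. It is purely illustrative arithmetic, not vendor-published math, and it ignores the precision formats and sparsity assumptions that typically underlie “AI exaflops” marketing numbers.

```python
# Illustrative arithmetic from the article's quoted figures; not AMD specs.

rack_flops = 3e18      # 3 AI exaflops per Helios rack (3 quintillion FLOPS)
gpus_per_rack = 72     # 72 Instinct MI455X accelerators per rack

# Implied per-accelerator throughput (overhead and precision caveats apply):
per_gpu_flops = rack_flops / gpus_per_rack
print(f"~{per_gpu_flops / 1e15:.1f} petaflops per MI455X")

# "Yotta-scale": 1 yottaflop = 1e24 FLOPS. The article cites a projection
# of 10 yottaflops of global compute within five years.
global_flops = 10e24
racks_needed = global_flops / rack_flops
print(f"~{racks_needed / 1e6:.1f} million Helios-class racks for 10 yottaflops")
```

Even at 3 exaflops per rack, reaching 10 yottaflops would take millions of racks, which is why the “yotta-scale” framing is a statement about the whole industry’s buildout rather than any single deployment.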

While my brain can’t process these numbers, it is clear AMD is planning to have a leadership role in AI’s future.

AMD’s potential to surpass Nvidia may hinge on its upcoming Instinct MI500 series, set for launch in 2027. Built on CDNA 6 architecture, advanced 2 nanometer (nm) process technology, and HBM4E memory, the MI500 is projected to provide up to a 1,000x increase in AI performance over the MI300X. This leap accounts for architectural improvements, new low-precision formats like FP4, faster interconnects, and enhanced memory.

If AMD’s current offerings, like the MI455X in Helios, achieve parity in specific use cases — such as inference or enterprise deployments — the MI500 could push AMD ahead in overall AI efficiency and scalability.

Current assessments indicate AMD is at or near parity with Nvidia in some metrics, depending on the workload, thanks to its open approach that allows easier integration for developers avoiding any proprietary lock-in. This could attract more hyperscalers and enterprises looking for alternatives in an environment of rising AI infrastructure costs.

Nvidia shows no signs of slowing, with work already underway on its next-generation architecture after Rubin, code-named Feynman, which is scheduled for release in 2028. Feynman will build on Rubin’s foundation, incorporating a Vera CPU and aiming to further accelerate AI across domains.

Yet AMD is matching this pace, with its MI500 preview signaling an aggressive stance on narrowing the gap with its rival. What was once Nvidia’s solo lead in AI chips has quickly evolved into a tight two-horse race between the titans of the industry.
