Virtually all hyperscale cloud service providers (CSPs), as well as some of the leading developers of AI accelerators, now run their own custom-silicon programs that cover not only AI accelerators but also custom general-purpose CPUs, usually based on the Arm instruction set architecture (ISA). According to Counterpoint Research, over the next several years custom Arm-based CPUs will grow to power around 90% of AI servers, leaving x86 and off-the-shelf Arm processors to share the remaining roughly 10%.
x86 processors from AMD and Intel have long dominated general-purpose servers, which is why most AI servers initially relied on EPYC and Xeon processors. However, Arm-based custom CPUs tailored to specific data-intensive AI workloads are more cost- and power-efficient. Furthermore, because AI workloads are still emerging, backward compatibility with x86 is not vital. To that end, AWS, Google, and Microsoft have developed their own proprietary Arm-based processors for their workloads, whereas Meta is the alpha customer for Arm's own processor.