Fujitsu plans to develop an NPU fabricated on Rapidus's advanced 1.4nm process, according to a Nikkei Asia report published today. The chip will be designed for AI inference in servers and related systems, with Japan's New Energy and Industrial Technology Development Organization (NEDO) expected to cover approximately two-thirds of the estimated ¥58 billion ($363 million) initial development cost. The project would see the NPU both designed and manufactured entirely in Japan.
NPUs are dedicated AI inference processors, distinct from the general-purpose GPUs that dominate AI training. While GPUs excel at the massively parallel processing required to train LLMs, inference workloads often run more efficiently on NPUs, which are purpose-built for the lower-precision matrix math those tasks demand and consume less power per operation. NPUs typically appear in consumer devices like PCs and smartphones, but Fujitsu intends to deploy them in server systems.
Fujitsu, of course, doesn’t produce its own GPUs; it has existing partnerships with Nvidia and plans to connect its CPUs with Nvidia GPUs on the same substrate by 2030. It also has a separate AI chip partnership with AMD.
Japan's government has been aggressively funding something of a semiconductor revival. Rapidus has secured roughly ¥1.7 trillion in combined government and private investment to date, while Japan's Ministry of Economy, Trade, and Industry has nearly quadrupled its budgeted support for advanced semiconductors and AI development, to approximately ¥1.23 trillion for the current fiscal year.