Chinese outfit Zhipu AI claims it trained a new model entirely on Huawei hardware, and that it's the first company to build an advanced model using only Chinese kit.

Zhipu, which styles itself Z.ai and runs a chatbot at that address, offers several models named General Language Model (GLM). On Wednesday the company announced GLM-Image, which it says employs "an independently developed 'autoregressive + diffusion decoder' hybrid architecture, which enables the joint generation of image and language models," and which it claims represents an important advance on the Nano Banana Pro image-generating AI.

The post also states that Z.ai developed the model using the Ascend Atlas 800T A2, a Huawei server that can run four Kunpeng 920 processors packing either 48 or 64 cores. Huawei's processors use Arm-compatible cores of its own design.

The servers also use Huawei’s Ascend 910 AI processors.

The most recent Ascend model is 2025’s 910C, which Huawei claims “can achieve around 800 TFLOPS of computing power per card at FP16 precision, which is approximately 80% of the computing power of NVIDIA’s H100 chip (launched in 2022).”

On model-mart Hugging Face, Zhipu describes GLM-Image’s architecture as comprising two elements:

Autoregressive generator: a 9B-parameter model initialized from GLM-4-9B-0414, with an expanded vocabulary to incorporate visual tokens. The model first generates a compact encoding of approximately 256 tokens, then expands to 1K–4K tokens, corresponding to 1K–2K high-resolution image outputs.

Diffusion Decoder: a 7B-parameter decoder based on a single-stream DiT architecture for latent-space image decoding. It is equipped with a Glyph Encoder text module, significantly improving accurate text rendering within images.
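To make that two-stage flow concrete, here's a minimal Python sketch of the pipeline as Zhipu describes it. Every class, method, and shape below is either lifted from the Hugging Face description or a hypothetical stand-in – this is not the actual GLM-Image code or API.

```python
# Illustrative sketch of GLM-Image's described two-stage pipeline.
# All names are hypothetical stand-ins; token counts come from Zhipu's
# Hugging Face description, nothing here is the real model.

import numpy as np


class AutoregressiveGenerator:
    """Stand-in for the 9B model (initialized from GLM-4-9B-0414, visual vocab added)."""

    def generate_compact(self, prompt: str, n_tokens: int = 256) -> np.ndarray:
        # Stage 1a: emit a compact visual-token encoding of ~256 tokens.
        return np.random.randint(0, 16384, size=n_tokens)

    def expand(self, compact: np.ndarray, n_tokens: int = 4096) -> np.ndarray:
        # Stage 1b: expand the compact code to 1K-4K tokens for a 1K-2K image.
        return np.random.randint(0, 16384, size=n_tokens)


class DiffusionDecoder:
    """Stand-in for the 7B single-stream DiT latent decoder with a Glyph Encoder."""

    def decode(self, visual_tokens: np.ndarray, glyph_text: str | None = None) -> np.ndarray:
        # Stage 2: decode latents to pixels, conditioned on the visual tokens
        # (and, per Zhipu, a Glyph Encoder embedding for in-image text).
        side = int(np.sqrt(len(visual_tokens))) * 16  # e.g. 4096 tokens -> 1024 px
        return np.zeros((side, side, 3), dtype=np.uint8)


def generate_image(prompt: str) -> np.ndarray:
    ar, decoder = AutoregressiveGenerator(), DiffusionDecoder()
    compact = ar.generate_compact(prompt)             # ~256-token sketch
    tokens = ar.expand(compact)                       # 1K-4K-token refinement
    return decoder.decode(tokens, glyph_text=prompt)  # latent -> pixels


if __name__ == "__main__":
    print(generate_image("A street sign reading 'Hello, world'").shape)
```

The point of the split is that the autoregressive stage handles composition cheaply in token space, while the diffusion stage handles pixel-level fidelity – the Glyph Encoder is Zhipu's answer to the usual failure mode of mangled text inside generated images.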

The company says “the entire process from data preprocessing to large-scale training” took place using that Atlas server, and that the model’s debut therefore proves “the feasibility of training cutting-edge models on a domestically produced full-stack computing platform.”

And in some ways it does. But Zhipu hasn't revealed how many servers or accelerators it used to create GLM-Image, or how quickly they did the job.

The company can therefore point to having developed a model with local tech – sophistry that ignores Arm's contribution to Kunpeng – but hasn't offered any hints about whether Huawei's hardware did the job at a speed or price that suggests China has stolen a march and the rest of the world needs to take notice.

Even if Zhipu’s rig chugged along at modest speeds, news of an all-Chinese model remains notable given pundits’ predictions that many future models will be smallish affairs dedicated to niche domains. If China now has the capacity to make such models without hardware from Nvidia or AMD, that’s a threat to those chip design firms’ future revenue.

Another threat to the two GPU giants is the strict export controls, announced yesterday, that mean Washington will assess every application to sell certain GPUs to Chinese buyers.

GLM-Image is open source, so it's freely available. The Register mentions that in light of think tank ASPI's opinion that China uses AI to export its culture and values, and its recommendation that nations "prevent China's AI models, governance norms and industrial policies from shaping global technology ecosystems and entrenching digital authoritarianism." ®