According to a September 17th report by Xin Dongxi, the 2025 Global AI Chip Summit was held in Shanghai today. 42 experts from industry, academia, and research, along with pioneering entrepreneurs in the AI chip field, shared their latest observations and thinking on how Chinese AI chips can innovate, reach deployment, survive, and break through in the second half of the large-model era.
As in previous years, the conference brought together new and established domestic AI chip players, core ecosystem-chain enterprises, and representatives of investment institutions, offering in-depth technical and industrial insights and a comprehensive look at the hottest development directions for AI chips.
The summit was jointly hosted by Zhixingxing, a subsidiary of Zhiyi Technology, and Xin Dongxi. Under the theme “AI Large-scale Infrastructure, Intelligent Chips for a New World”, it comprised a main forum, special-topic forums, technical seminars, and an exhibition area, covering cutting-edge topics such as large-model AI chips, architectural innovation, compute-in-memory, and super-node and intelligent computing cluster technology. AWE served as the summit’s strategic partner.
Notably, many AI chip companies disclosed significant information at the conference. Several senior investors shared the criteria they weigh when investing in AI chip companies; a super-node startup raised 600-700 million yuan within a few months of its founding; Yuntian Lifa is developing a new-generation NPU, Nova500; Huawei Ascend will fully open-source CANN in December; and the next-generation chip from Sunrise is designed to deliver large-model inference cost-performance comparable to NVIDIA’s Rubin GPU.
In the exhibition area, 11 exhibitors, including Chaomo Technology, Kuixin Technology, Teledyne LeCroy, Alphawave, Xinklai Technology, Achronix, Sunrise, Juliang Wuxian, AWE, Andes Technology, and Xinmeng Technology, presented their products.
▲ A corner of the exhibition area
Speaking on behalf of the organizer, Gong Lunchang, co-founder and CEO of Zhiyi Technology, delivered an address. Since March 2018, the Global AI Chip Summit has invited more than 180 experts from industry, academia, and research to share industry trends and insights, making it the only continuously held, widely influential industry summit in the AI chip field and an important window into domestic and international AI chip developments.
▲ Gong Lunchang, co-founder and CEO of Zhiyi Technology
Gong Lunchang also announced the upcoming 2025 China Embodied Intelligence Robot Conference, which will be held in Shenzhen at the end of November this year, and welcomed everyone to participate and exchange ideas.
Note: This article summarizes the highlights of the main forum and the large-model AI chip special-topic forum. More related reports will follow.
I. Professor Wang Zhongfeng, IEEE Fellow: Interpreting Three Cutting-edge Directions for AI Chips
To address three major challenges, namely model scale growing faster than Moore’s Law, the “memory wall” of traditional architectures, and increasingly diverse application scenarios, Professor Wang Zhongfeng, dean of the School of Integrated Circuits at Sun Yat-sen University and an IEEE/AAIA Fellow, examined three cutting-edge directions in AI chip design, offering insights and guidance for the industry’s development.
First, model-driven efficient chip design. As model sizes keep growing, hardware should be deeply adapted to the characteristics of AI models rather than letting hardware resources constrain model development.
The Transformer hardware acceleration architecture proposed by Professor Wang Zhongfeng’s team was the first complete solution to the challenge of accelerating Attention computation, winning the Best Paper Award at the 2020 IEEE International System-on-Chip Conference (SOCC). The team’s N:M sparse Transformer inference acceleration framework can rapidly develop and deploy Transformer models at arbitrary N:M sparsity ratios while keeping accuracy stable, and its coarse- and fine-grained mixed-precision quantization, paired with a dedicated multi-core accelerator that handles the differentiated computations, enables more flexible scheduling.
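For readers unfamiliar with N:M structured sparsity, the following is a minimal NumPy sketch of the core idea only, not the team’s framework: within every group of M consecutive weights, keep the N largest-magnitude values and zero the rest, producing a regular pattern that dedicated hardware can exploit.

```python
import numpy as np

def apply_nm_sparsity(weights: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Keep the n largest-magnitude values in every group of m consecutive weights
    along the last axis and zero the rest (N:M structured sparsity).
    Assumes the total number of weights is a multiple of m."""
    orig_shape = weights.shape
    groups = weights.reshape(-1, m)                         # one row per group of m weights
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]   # indices of the smallest-magnitude entries
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)            # zero out the dropped positions
    return (groups * mask).reshape(orig_shape)

w = np.random.randn(4, 8).astype(np.float32)
w_24 = apply_nm_sparsity(w, n=2, m=4)   # each group of 4 keeps only its 2 largest weights
```

In practice the sparsity pattern is usually learned or recovered through fine-tuning rather than applied in one shot, but the regular structure is what lets an accelerator skip the zeroed weights deterministically.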
Second, application-driven AI chip innovation, which focuses on getting models deployed in real applications and seeks a balance between energy efficiency and flexibility.
There is no one-size-fits-all answer in architectural innovation, only more suitable ones. Combining reconfigurable hardware architectures (which dynamically adapt to different algorithms’ requirements), domain-specific architectures (which achieve higher energy efficiency than general-purpose architectures in vertical scenarios), and advanced packaging technologies such as chiplets (which improve design flexibility, cut costs, and shorten time-to-market), application-driven AI chip design will be an important research direction worth exploring.
Third, chip design based on compute-in-memory, which attacks energy consumption at the architectural root and balances performance against power.
The compute-in-memory architecture is an important direction for the paradigm shift in chip design. Digital compute-in-memory offers high precision, high stability, and a more mature ecosystem, but suffers from higher energy consumption, higher hardware overhead, and lower storage density. Analog compute-in-memory offers low energy consumption, high storage density, and low hardware overhead, but comes with lower precision, demanding process requirements, and an immature ecosystem. The large-model accelerator built on an SRAM-based digital compute-in-memory architecture by Professor Wang Zhongfeng’s team supports multiple data precisions and improves energy efficiency by dozens of times over a traditional von Neumann architecture.
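To illustrate the dataflow that gives digital SRAM compute-in-memory its precision advantage, here is a toy NumPy emulation. It is a sketch of the general bit-serial CIM idea under the assumption of unsigned integer operands, not Professor Wang’s accelerator: weight bits stay in place in the array, input bits stream in one per cycle, and each cycle reduces to a bitwise AND plus a popcount that is shifted by bit significance and accumulated.

```python
import numpy as np

def digital_cim_matvec(W: np.ndarray, x: np.ndarray,
                       w_bits: int = 8, x_bits: int = 8) -> np.ndarray:
    """Toy emulation of a digital SRAM compute-in-memory macro computing W @ x
    for unsigned integers: bit-planes of W are 'stored' in place, bits of x are
    streamed serially, and AND + popcount results are shift-accumulated."""
    W = W.astype(np.int64)
    x = x.astype(np.int64)
    acc = np.zeros(W.shape[0], dtype=np.int64)
    for xb in range(x_bits):                          # one input bit per cycle
        x_bit = (x >> xb) & 1
        for wb in range(w_bits):                      # one stored weight bit-plane
            w_bit = (W >> wb) & 1
            popcount = (w_bit * x_bit).sum(axis=1)    # bitwise AND + adder tree
            acc += popcount << (xb + wb)              # shift by combined bit weight
    return acc

W = np.random.randint(0, 256, size=(4, 16))   # unsigned 8-bit weights
x = np.random.randint(0, 256, size=16)        # unsigned 8-bit activations
assert np.array_equal(digital_cim_matvec(W, x), W @ x)   # exact integer result
```

Because the adder tree and shift-accumulate logic sit inside or directly beside the SRAM array, the weights never travel across a memory bus, which is where the energy savings over a von Neumann design come from, while the all-digital arithmetic keeps the result bit-exact.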
The above three paths are not isolated but support each other, jointly promoting the development of AI chips from “general adaptation” to “precise customization”.
▲ Professor Wang Zhongfeng, dean of the School of Integrated Circuits at Sun Yat-sen University and an IEEE/AAIA Fellow
Professor Wang Zhongfeng summarized that the current development of AI chips shows three key trends: first, “specialization”, from general-purpose computing to domain-specific computing; second, the “synergistic evolution” of algorithms, software, and hardware; third, “integration”, using new computing methods to break through performance bottlenecks.
Take SRDA (System-level Minimal Reconfigurable Dataflow), a dedicated architecture for AI computing, as an example: through innovations such as distributed 3D memory control, a reconfigurable dataflow computing architecture, and system-level simplified hardware-software co-design, it can significantly improve the utilization and performance of AI compute in large-model intelligent computing scenarios, enabling future AI compute chips built on domestic processes to match the performance of foreign GPGPUs built on more advanced process nodes.
Developing the next-generation computing paradigm first requires the symbiosis of software, algorithms, and hardware so that every link evolves in coordination. It then requires ubiquitous, efficient, and trustworthy intelligent computing: giant supercomputing systems for AGI training in the cloud, real-time decision-making brains for autonomous robots at the edge, and ultra-low-power always-on perception chips on the device side.
In addition, efficiently supporting intelligent computing will require integrating emerging technologies and realizing the potential combination of photonic computing, quantum computing, and AI chips.
Professor Wang Zhongfeng called for open standards to open up interfaces, interconnects, and instruction sets and lower the threshold for innovation. He also emphasized deep industry-academia-research cooperation to jointly tackle technical problems in areas such as quantum-intelligence integration, compute-in-memory, and new materials, processes, and devices, and stressed the need to cultivate interdisciplinary talent skilled in algorithms, architectures, underlying circuits, and software development.
II. High-level Dialogue: Domestic Computing Power to Be Ignited in the Second Half of the Large-Model Era, and the AI Chip IPO Wave to Offer More Room for Imagination
The high-level dialogue, themed “Breaking Through and Breaking Out for Chinese AI Chips in the Second Half of the Large-Model Era”, was hosted by Zhang Guoren, co-founder of Zhiyi Technology and editor-in-chief of its intelligent vehicle, chip, and industry media matrix. Four guests shared their views: Wang Fuyu, partner at Hel Capital; Jiang Chun, managing partner at Puhua Capital; Liu Shui, managing director at BV Baidu Ventures; and Zhao Zhanxiang, founding partner at IO Capital.
Zhang Guoren said that the second half of the large-model era is not only a technological competition but also an ecosystem competition. He expected China to produce vertical integrators combining “chips, scenarios, and algorithms”, along with more single-item champions.
▲ Zhang Guoren, co-founder of Zhiyi Technology and editor-in-chief of the intelligent vehicle, chip, and industry media matrix
1. After DeepSeek Expands the Computing Power Pool, What Do Investors Look for in AI Chip Companies?
Wang Fuyu believed that the emergence of DeepSeek signals the appearance of a “leading customer” in China: good technology companies will put forward requirements to chip companies. Jiang Chun added that DeepSeek’s greatest significance is that China now has its own large-model system, giving domestic chips a platform.
What kind of AI chip teams do these senior investors tend to back? All of them value whether a company’s technical route has converged.
Zhao Zhanxiang pays particular attention to whether the technical route contains genuine improvements and innovations. Liu Shui noted that BV Baidu Ventures does not measure a project’s value by commercialization alone. Jiang Chun quipped, “Only children make choices; adults want it all”, saying he invests in both mature and innovative technologies. Wang Fuyu divided the market into two types: relatively certain markets, which test a team’s accumulated experience and execution, and technology-driven markets.
Looking at future opportunities for chip companies, Jiang Chun believed that until carbon-based civilization is replaced by silicon-based civilization, the market opportunities for computing power are endless and the prospects boundless. The current technology system is not the end state.
2. A Super-Node Startup Raised 600-700 Million Yuan in a Few Months after Its Establishment
Wang Fuyu said that many large companies are also building network architectures in a non-all-in-one way. A variety of development paths will coexist in the future, and enterprises need to stay open-minded and alert.
In Jiang Chun’s view, for Chinese enterprises facing the current situation, the scale-out route represented by the “primitive weapons” and the scale-up route represented by super-nodes are at least equally important.
Zhao Zhanxiang revealed that a super-node startup IO Capital is currently watching raised 600-700 million yuan within just a few months of its founding. Behind the super-node opportunity, however, challenges remain in network reliability and failure-rate requirements.
BV Baidu Ventures has invested in many embodied intelligence companies. According to Liu Shui, embodied intelligence is an emerging field, and chips, as its core hardware support, are still iterating; the industry is still searching for mature chip products that can fully match the variety of complex physical-interaction scenarios.
At present, many enterprises choose to combine x86 CPUs with AI chips to build basic computing platforms. This is a natural transitional choice in the course of technological evolution, allowing them to quickly validate product logic and get initial scenarios running end to end.
This “transitional” nature is itself an opportunity for the industry. Whether it is developing dedicated chips better suited to embodied workloads or squeezing more compute efficiency out of existing hardware, anything that solves real pain points in actual scenarios is an opportunity for industrial development.
3. Cambricon Once Topped the A-Share Market, “Carrying the Hopes of the Whole Village”
Although these investors focus mainly on the primary market, they also shared their observations on the secondary market. They generally believed that the AI chip companies going public from now on will offer more room for imagination than the domestic-substitution-themed chip companies that listed on the Science and Technology Innovation Board in 2019.
This year, Cambricon briefly surpassed Kweichow Moutai to become the “king of stocks” in the A-share market. In Jiang Chun’s view, Cambricon’s sharp rise may be “carrying the hopes of the whole village”. Compared with the previous wave of chip IPOs, when the domestic-substitution market was limited, demand in today’s AI market is effectively unlimited.
Liu Shui added that the demand for AI is injecting strong impetus into the construction of computing infrastructure. Currently, many domestic