When I first lived in China nearly a decade ago, I was stunned by the dizzying scale and sophistication of its digital infrastructure. So when a driverless taxi ferried me through Shanghai last September (no safety supervisor aboard), it felt almost routine. Using omnipresent “everything” apps (WeChat and Alipay) to connect with urban services, pay for street food, and build a modest following on Bilibili (9 followers and counting) was normal. Catching the hotel lift with a little autonomous robot was fun but unremarkable.

What did surprise me, however, was a trip to the newly built Shenzhen Science and Technology Museum. I expected a typical celebration of the march of progress. Instead, the exhibit framed recent AI advances through themes of national rejuvenation, social stability, and collective purpose. The contrast with Western narratives – which tend to foreground market innovation and risk – was striking.

While China and the US-led West are racing to develop AI, they are moving in fundamentally different directions, guided by different stories about what technology is for.

Cultural narratives make certain policy responses feel natural, or even inevitable. They shape adoption by determining who has authority to act, how policy gets written, and whether institutions move quickly or cautiously. In practice, this plays out through mission, market, and risk frames.


In 2022, weeks before the public launch of ChatGPT, the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights. Though non-binding, the document emphasised a risk-driven approach to innovation policy, anticipating potential rights infringements by automated systems. Two years later, the European Parliament passed the EU AI Act – the first purpose-built, AI-specific legislation – codifying this risk framing into market-entry requirements and economy-wide rights protections.

By positioning AI as a threat, risk framing justifies both non-binding guidance like the AI Bill of Rights and binding constraints such as the EU AI Act.

This is very different from China’s foundational AI strategy document, the New Generation Artificial Intelligence Development Plan, which advances a mission-driven narrative:

“…accelerate the deep integration of AI with the economy, society and national defence … comprehensively enhance society’s productive forces, comprehensive national power, and national competitiveness … to achieve the two centennial goals and the great rejuvenation of the Chinese nation.”

This language speaks of mobilising resources, not pre-empting rights infringements. It signals a need for coordination and positions AI as a national industrialisation project rather than merely a commercial technology.

In China, mission narratives such as “the great rejuvenation of the Chinese nation” legitimise the use of existing state centralisation capacity for AI policy.

In the West, risk framing creates a “negative mandate” that legitimises the introduction of policy constraints against the counterweight of market-led narratives.

Differences in framing matter for the adoption of AI because they shape how much policy clarity institutions receive. Institutions respond to predictability, not hesitation.

China’s establishment of the National Data Administration in 2023 is one example of centralisation capacity being wielded to telegraph a stable environment and predictable regulation. In the West, and particularly in Australia, responsibility for data regulation is fragmented across multiple independent organisations, such as the Australian Competition and Consumer Commission and the Office of the Australian Information Commissioner. Institutions operating in this multi-mandate environment must choose between competing priorities, such as privacy and innovation.

Western reliance on risk narratives translates into policy ambiguity that slows coordination. When AI is framed as a safety issue rather than a resource-mobilisation challenge, review gets prioritised over scaling. Authority gets routed through an already fragmented set of safety-oriented specialist regulators, rather than centralised to coordinate AI adoption.

Regulated market participants have to wait for risk-oriented authorities to issue directives, or wade through conflicting guidance. Unregulated actors face lighter friction, choosing whether to opt in to different market-coordinated products and services. In both cases, friction slows adoption: regulated industries are effectively frozen (with pockets of shadow adoption) while unregulated ones continue – albeit cautiously.

A series of conversations I’ve had with 25 Australian, Chinese, and American professionals tells a consistent story. Western policy ambiguity is real and felt across industries. Clinicians, educators, and healthcare workers describe a kind of regulatory limbo: aware that AI tools are being used informally (i.e., shadow adoption), but uncertain about what their institutions permit. Whether this reflects caution or paralysis, the institutional environment is actively shaping adoption behaviour.

While we navigate this limbo, others continue with a cleaner script. Around Chinese New Year, a friend from Beijing came to visit. Late one night, over beef strips and Tsingtao at a kitchen table in Warrandyte, I explained what I’d been writing about – how China and the West seem to be telling different stories about what AI is for. He laughed – of course – before offering a Chinese idiom I hadn’t heard before: 众志成城, literally something like “united wills form a fortress”. There’s no clean English equivalent. That’s probably the point.