Credits

Dang Nguyen is a writer and researcher of AI, culture and aesthetics. Her work traces the informal infrastructures and moral frictions of digital life, focusing on Southeast Asia and the technological practices that emerge under conditions of constraint. Dang is a Majority World Scholar at Yale Law School and an incoming Bellwether Scholar at the University of California, Berkeley School of Information.

HANOI, Vietnam — It’s June 20. There’s a velvet backdrop, LEDs pulsing cyan and a giant banner declaring the launch of a new national AI alliance. Truong Gia Binh, chairman of FPT Corp., one of the country’s leading IT and telecommunications companies, strides onstage, quoting a wartime slogan — “Nothing is more precious than independence and freedom” — before casting artificial intelligence as Vietnam’s next great battle, an existential fight for the country’s future. Around him are rectors from top universities, ministers hunched over sleek tablets, startup founders livestreaming from the aisles.

The question everyone expects, the one the world keeps asking, hangs in the air: Is it the U.S. or China? Which AI superpower will Vietnam choose?

But Binh flips the script. FPT, he announces, will open its “core tech stack” — language models, cloud infrastructure, even training data — to any domestic partner who wants to build with it. He outlines three commitments: FPT will open a national sandbox for controlled experimentation, deliver a locally trained GPT-style model by year’s end and support a state-backed push to teach AI in schools. Together, the commitments amount to a refusal — “We don’t stand on the shoulders of giants,” an FPT executive later tells the crowd. “We walk beside them.”

The applause swells.

In computing, a “stack” is simply the layered architecture that makes technology run: chips and circuits at the base, then operating systems, then applications, all the way up to the user interface. Each layer builds on the one below. Decisions made at one level cascade upward. Which is why choices about the stack are never just technical — they decide who holds power, and who must follow.

Banners at the FPT event promise an open, comprehensive and state-regulated electronic ecosystem. The familiar poles of AI politics — Silicon Valley’s proprietary platforms and Beijing’s centralized infrastructure — are never named, but everyone in the room understands what is being contested: who gets to define the terms of intelligence itself. The stakes are stack-level choices — black-box dependence or modular improvisation; opacity or legibility; someone else’s roadmap or a sovereign design of your own. In practical terms, the decision is the difference between paying for access to OpenAI’s closed API and fine-tuning an open-weight model on a café’s shared GPU rig — between consuming intelligence as a service and composing it as an act of sovereignty. One rents a mind, the other trains its own in the wild.
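
The contrast can be made concrete in a few lines of code. Below is a minimal sketch of the two postures; the model names are illustrative assumptions rather than anything announced in Hanoi, and the open-weight checkpoint could be any of dozens.

```python
# Two postures toward machine intelligence. Both snippets are sketches;
# the model names are illustrative assumptions, not endorsements.

# Renting a mind: every query is routed to a vendor-hosted, closed model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this decree."}],
)
print(reply.choices[0].message.content)

# Training your own in the wild: open weights run on hardware you control
# and can be fine-tuned on local data. Qwen2.5-0.5B is one small open-weight
# checkpoint among many; any similar model would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
ids = tok("Summarize this decree.", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```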

This is, in essence, a claim to AI sovereignty: the ability to build and govern infrastructures on Vietnam’s own terms while still enabling cross-border flows of data, talent and computation. AI sovereignty here does not mean isolation, but authorship — deciding which data, models and rules shape, and will shape, how machine intelligence is built and deployed.

In short, Vietnam is not picking sides. It is building a third stack.

Infrastructural Nonalignment

Many view AI geopolitics as a culture war between Silicon Valley’s libertarian individualism and China’s communitarian authoritarianism. That familiar tableau of cowboy disruptors and state-backed titans still lingers in op-eds, but it obscures a quieter territorial redrawing underway at the infrastructural level. Baidu, long positioned as China’s national champion in AI, has been eclipsed by a wave of leaner, more research-oriented Chinese labs such as Z.ai (formerly Zhipu AI), Baichuan Intelligence and MiniMax. These newer actors release open-weight models and invite scrutiny, blurring the assumed line between authoritarian opacity and democratic transparency.

The sharper fault line now runs not between nations but between infrastructures — between the guarded logic of proprietary systems and the unruly emergence of open-weight models; between centralized command and distributed improvisation; between the doctrine of safety and the discipline of scrutiny. If OpenAI, Anthropic and Google DeepMind’s frontier models have largely represented the logic of enclosure, then open-weight projects like DeepSeek and Meta’s Llama — not fully open-source but released in ways that allow retraining and scrutiny — gesture toward a counter-current that is partial, constrained, yet powerful in its transnational diffusion. Even as OpenAI has more recently released “open models,” the broader movement of open-weight diffusion cuts across borders, destabilizing the notion that AI will crystallize into two superpower-led blocs.

In other words, culture is not what is being exported; technology stacks are.

What travels across borders aren’t values per se, but configurations of infrastructure: model weights, licensing schemes, data regimes, cloud dependencies and developer ecosystems. These are the substrates through which AI systems are made legible, tractable and governable. It is these substrates — rather than grand narratives about freedom or control — that shape how knowledge is produced, validated and operationalized.

“Is it the U.S. or China? Which AI superpower will Vietnam choose?”

Vietnam’s position in this landscape is telling. Neither fully aligned with the U.S. nor China, it is assembling a third stack that draws selectively from both sides while cultivating its own infrastructural sovereignty. Through state-linked firms like FPT, domestic LLM research and partnerships with groups like U.S.-based Nvidia, Japan’s NTT Data Group and China’s Huawei that straddle geopolitical divides, Vietnam exemplifies a mode of infrastructural nonalignment: modular, adaptive and deeply attuned to the asymmetries of global AI. In declaring its own stack, Vietnam claims the right to decide how reality itself is translated into machine-readable form — what becomes visible, knowable and actionable to AI systems.

FPT’s stack is beginning to take discernible form. Unveiled in Japan in late 2024, the company’s AI Factory, a high-performance computing hub designed to train and deploy large AI models, is anchored by California-based Nvidia’s accelerated computing platform and equipped with thousands of H100 and H200 GPUs. Wrapped in the Nvidia AI Enterprise suite and the NeMo framework, this infrastructure undergirds FPT’s growing portfolio of Vietnamese-language models and vision systems. These models are served through FPT Smart Cloud, the firm’s sovereign cloud platform, which allows for flexible deployment — on-premises, at the edge on servers and devices close to where data is generated, or within domestic data centers.

The architecture is modular by design, satisfying Vietnam’s data-residency requirements by localizing storage and compute within Vietnam’s jurisdiction, while containerizing models and APIs so they can be deployed across borders. Backed by Japanese capital from Sumitomo Corp. and SBI Holdings, a Tokyo-based financial services group, FPT is also investing in regional data infrastructure to expand storage and processing capacity across Southeast Asia, along with sector-specific tuning programs that adapt models for use in industries like healthcare, finance and transportation. Here we have not a single, unified stack so much as a composable system: compute, weights and cloud services stitched together in a form that can be tuned to whatever context is relevant — health, finance, mobility — at home or abroad, on Vietnam’s own terms.

At the top of its stack, FPT has introduced a pair of platforms — AI Studio and AI Inference — designed to give Vietnamese developers and enterprises greater control over how AI models are adapted and deployed. Launched in April, these tools extend the AI Factory’s reach beyond infrastructure into application and authorship. AI Studio provides a fine-tuning environment built on Nvidia’s NeMo framework, a toolkit for customizing and retraining large models like DeepSeek-R1 and Llama 3.3 on internal or domain-specific datasets. AI Inference, by contrast, serves as the production layer, offering a catalogue of pretrained models — more than 20 at launch — available via API for rapid integration into enterprise workflows. Both operate atop the same GPU backbone as the Factory itself, ensuring continuity between experimentation and execution.
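
The generic pattern these platforms wrap is easy to sketch with open tooling. The snippet below uses Hugging Face’s transformers and peft libraries as a stand-in, since FPT’s NeMo-based pipeline is not public at this level of detail; the base model, the file “domain.jsonl” and the hyperparameters are all assumptions for illustration.

```python
# A sketch of the fine-tune step a platform like AI Studio automates,
# using transformers + peft as a stand-in for the NeMo-based pipeline.
# "domain.jsonl" is a hypothetical in-house corpus (e.g. Vietnamese legal text).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "Qwen/Qwen2.5-0.5B-Instruct"  # stand-in for DeepSeek-R1 or Llama 3.3
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach small trainable adapters rather than retraining all weights,
# so tuning fits on modest, locally owned GPUs.
lora = LoraConfig(r=8, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

def tokenize(example):
    enc = tok(example["text"], truncation=True, max_length=512)
    enc["labels"] = enc["input_ids"].copy()
    return enc

data = (load_dataset("json", data_files="domain.jsonl")["train"]
        .map(tokenize, remove_columns=["text"]))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
).train()

model.save_pretrained("ft-out/adapter")  # adapter weights stay on local infrastructure
```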

The result is a stack that is assembled, rather than monolithic, with domestic platforms, a sovereign cloud, high-performance compute and transnational research folded into its modular system. Each of its layers carries different dependencies, but together they allow Vietnam to hold authorship over the shape, orientation and reach of its AI infrastructure — one that’s stable enough to anchor public deployment and open enough to adapt or travel. Together, these platforms complete the circuit: from compute to model to use-case, all within an architecture that remains legible, governable and adaptable. The ambition here is not just technical performance, but epistemic discretion — the ability to decide which models are retrained, how they are tuned and for whom they are made to speak.

Platforms (FPT AI Studio, FPT AI Inference)
↑
Models (DeepSeek-R1, Llama 3.3)
↑
Data (publicly available, domain-specific)
↑
Compute (AI Factory, FPT Smart Cloud)

An instance of the third stack: each layer builds on the one below.

In parallel, FPT has continued to deepen its strategic alignment with Mila, the Quebec AI institute founded and advised by AI pioneer Yoshua Bengio. The partnership, which began in 2020 and was renewed in 2023, links Vietnam’s largest tech conglomerate with one of the world’s leading research centers in deep learning and responsible AI. On paper, the collaboration is focused on advancing large language models (LLMs) and natural language processing. But its significance runs deeper: It is a quiet counterexample to the prevailing narrative of AI as an ideological battleground. Rather than choosing between spheres, FPT is building connective tissue — embedding Vietnamese researchers within Mila’s lab, circulating knowledge across borders and shaping governance standards from a position that is neither defensive nor derivative. In a landscape where openness is often declared but rarely reciprocal, this is what infrastructural diplomacy can look like.

“Neither fully aligned with the U.S. nor China, it is assembling a third stack that draws selectively from both sides while cultivating its own infrastructural sovereignty.”

Vietnam is not alone in forging a third path. In Indonesia and Malaysia, Nusantara-style AI strategies — which frame AI design around the archipelago’s plural linguistic and cultural infrastructures by prioritizing multilingual corpora and local cultural knowledge — reflect an ambition to build systems attuned to the region’s extraordinary linguistic and cultural diversity; the project is as infrastructural as it is symbolic. The United Arab Emirates (UAE), meanwhile, has positioned itself as a regional vanguard through the release of Falcon, a series of open-weight language models that signal both technical capacity and the sovereign intent to develop and license its own models rather than depending on U.S. or Chinese systems. Taken together, these initiatives point to a wider shift: not a rejection of global AI paradigms, but a refusal to be wholly contained by them.

If infrastructure, not ideology, is traveling, how do export-controlled chips, data residency laws or safety regimes map onto a nonaligned stack — one built outside the U.S.-China duopoly that draws from both, but is governed locally? What happens when the aspiration to sovereign configuration runs into the hard limits of material interdependence — when the cloud is not local, the chip is embargoed or the licensing regime bakes in foreign oversight? The challenge for what I call nonaligned builders — those operating in third countries outside the U.S.-China binary — is not just to assemble stacks that work, but to govern stacks that remain legible under pressure. Their task is to hold open the space between technical borrowing and epistemic capture. In this emerging order, the real question is not whether nations can build independently, but whether they can stay in control of what their systems are allowed to know, remember and act upon. The third stack holds an exquisite contradiction: It both evades and entangles itself with old powers.

Vietnam’s advantage may lie not in self-sufficiency but in strategic bricolage: the ability to assemble a working stack from mismatched parts, to fine-tune open weights from both China and the U.S. on GPUs bought in part with Japanese capital, and to deploy them on sovereign cloud infrastructure that complies with Vietnamese law but draws from global standards. In this way, the third stack is not sealed off but selectively permeable: borrowing where it must but governing what it borrows — even if stitching together components from competing powers requires constant negotiation of technical standards and political constraints.

The third stack may never match the scale of its U.S. or Chinese counterparts, but that is the point. Its advantage lies in asymmetrical scaling: in tuning for context, licensing with constraint and extending its reach not by dominating the field, but by slipping beneath it. Consider SemiKong, an open-weight large language model developed for the semiconductor industry through a collaboration between FPT Software, Silicon Valley-based Aitomatic and Tokyo Electron Ltd. Built on Meta’s Llama 3.1 architecture, SemiKong outperforms general-purpose models like GPT in sector-specific tasks — an illustration of how sovereign capability can be exercised not through scale, but through precision. By contributing to an open-source, transnational effort that aligns with its own industrial priorities, Vietnam inserts itself not as a peripheral adopter but as a co-author of global AI infrastructure. This is asymmetry as strategy: composing relevance not by competing at the center, but by accruing influence at the edge.

Even when components like chips, frameworks or toolkits are foreign, Vietnam retains leverage through procedural sovereignty: the ability to constrain how data moves, where models are trained and under what terms systems are deployed. If the lower layers of the stack remain entangled in foreign supply chains and architectures, the upper layers offer room to assert rules, policies and frictions that subtly reroute control.

The passage of Vietnam’s first-ever Law on the Digital Technology Industry in June marks a turning point in this strategy. While the EU builds its AI regime through risk tiers — classifying systems as unacceptable, high-, medium- or low-risk with corresponding obligations — and the U.S. leans on voluntary disclosure, where companies pledge transparency rather than comply with binding rules, Vietnam’s approach is more infrastructural: classifying digital systems as strategic assets and binding them to pre-approval requirements, domestic data handling and sectoral oversight. The law does not aim to lead through values or scale, but through configuration — embedding sovereignty not in rhetoric, but in the mechanics of deployment.

“Vietnam’s advantage may lie not in self-sufficiency but in strategic bricolage.”

California-based Qualcomm’s establishment of its AI R&D center in Hanoi on June 10, however, reveals the entangled logic of infrastructural nonalignment. As Qualcomm’s third-largest facility worldwide — after India and Ireland — the Hanoi center is tasked with developing generative and agentic AI across domains ranging from smartphones and XR — the umbrella term for virtual, augmented and mixed reality — to automotive systems and the sundry connected devices known as the “internet of things.” At first glance, the move dovetails neatly with Vietnam’s national strategies on AI, semiconductors and digital transformation, with their emphasis on technology transfer, ecosystem development and workforce capacity. The partnership exemplifies Vietnam’s strategy of courting foreign investment while cultivating domestic sovereignty.

But the familiar contradictions of such arrangements remain. What enters under the banner of knowledge exchange may calcify into dependency — on imported architectures, inherited standards, embedded design assumptions. This dynamic sharpened in an earlier move in April, when Qualcomm quietly acquired MovianAI, a Vietnamese generative AI spin-off from Vingroup’s VinAI lab, best known for its Vietnamese-language models and mobility systems. What looked like local capacity was, in the end, simply absorbed by a U.S. multinational company. The test, then, is whether Vietnam can transmute this influx of code and capital into sovereign capacity before the licenses and safety regimes around it harden into a new perimeter that encloses — or possibly imprisons — its third stack.

But what is AI sovereignty? A posture, an imperative or a practicality? AI sovereignty, as it currently plays out outside the U.S.-China duopoly, is not a banner-waving claim to territorial control; rather, it manifests as the quiet right to decide what counts as knowledge and how that knowledge shows up in the world. That is, an epistemological sovereignty. This sovereignty lives in the stack — in the choices about model weights, training data, licensing regimes and cloud dependencies that govern what becomes legible and what remains unseen. AI sovereignty, in practice, is a situated authorship of machine reasoning: an infrastructural claim over how the world is parsed and made actionable. When a polity engineers its own stack, it is in effect engineering an epistemic world of AI, shaping not the raw world itself but the way the world will be disclosed to users, regulators and neighboring states.

The kind of AI sovereignty that the Vietnamese nonalignment model enacts is an act of epistemic refusal through infrastructural design. By refusing to license its perception of reality from OpenAI, AWS or Alibaba Cloud, Vietnam reserves the right to set the horizon of what can be perceived, queried and disputed within its own techno-social field. The third stack becomes a sovereign entity — a self-authored architecture of appearance. Every domestic corpus curated, every open-weight checkpoint released under a local license, is a clause in an epistemic constitution.

Here, the stakes outrun the vocabulary of “localization” or “self-reliance.” The question is no longer whether Vietnam can train a Vietnamese GPT, but whether it can dictate the contours of Vietnamese reality as machines come to perceive it. In other words, sovereignty is authorship of the perceptual field itself. What the development lexicon still dismisses as “local innovation” is, in truth, a claim to epistemic self-determination.

This is where licensing minutiae — the terms that determine how a model may be used, modified or shared — come into play. Unlike API keys, which simply permit or deny access, licenses articulate regimes of use. They encode norms around attribution, commercial prohibition or modification, transforming technical infrastructure into a site of governance.

Creative Commons’ “CC-BY-NC,” for instance, allows others to reuse a model with attribution but bars commercial use. An open-weight model is not just cheaper; it is epistemically plastic. It can be retrained, audited or forked to accommodate dialects, taboos or regulatory mandates that proprietary code cannot express. The license thus ultimately determines who may reshape what AI comes to mean. With generative models, where authorship has shifted from the creator to the system, licensing becomes a mechanism of epistemic control. Whoever controls a license is not just managing software — they are drawing the perceptual boundaries of the machine.
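
What this looks like operationally: a deployment pipeline can screen a model’s declared license before anything ships, turning legal metadata into an infrastructural gate. A minimal sketch, assuming the Hugging Face Hub’s public metadata API; the allow-list policy itself is hypothetical.

```python
# A sketch of license-as-governance: screening a model's declared license
# before it enters a deployment pipeline. The repo IDs are public Hugging Face
# checkpoints; the allow-list policy is a hypothetical example.
from huggingface_hub import model_info

ALLOWED = {"apache-2.0", "mit", "cc-by-4.0"}  # hypothetical deployment policy

for repo in ["vinai/PhoGPT-4B", "Qwen/Qwen2.5-0.5B-Instruct"]:
    card = model_info(repo).card_data
    license_tag = card.get("license", "unknown") if card else "unknown"
    verdict = "deployable" if license_tag in ALLOWED else "needs legal review"
    print(f"{repo}: license={license_tag} -> {verdict}")
```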

This bleeds into the policy domain: Exporting stacks is a struggle over cognitive jurisdiction. When the UAE releases Falcon weights — numerical parameters that shape how a model reasons — or Indonesia funds Nusantara-centered tokenizers — tools that determine how language is segmented and interpreted — they are exporting a template for how the world will appear to a machine and, by extension, to everyone downstream who relies on that machine’s judgment. Sovereignty travels as epistemic infrastructure long before it surfaces as policy.
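
A few lines make the point tangible: the same Vietnamese sentence fragments very differently under an English-centric tokenizer than under one trained on Vietnamese text. Both checkpoints below are public; the comparison, not the exact token counts, is what matters.

```python
# How tokenizer choice becomes epistemic: GPT-2's English-centric byte-pair
# vocabulary shatters Vietnamese diacritics into many small fragments, while
# PhoBERT's Vietnamese-trained vocabulary keeps syllables largely intact.
from transformers import AutoTokenizer

sentence = "Không có gì quý hơn độc lập tự do"  # the slogan quoted above

for repo in ["gpt2", "vinai/phobert-base"]:
    toks = AutoTokenizer.from_pretrained(repo).tokenize(sentence)
    print(f"{repo}: {len(toks)} tokens -> {toks[:10]}")
```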

“When a polity engineers its own stack, it is in effect engineering an epistemic world of AI, shaping not the raw world itself but the way the world will be disclosed to users, regulators and neighboring states.”

When contrasting proprietary safety regimes with open-weight scrutiny, the fault line is not merely technical — about how code is written or secured — but epistemic: about what knowledge the system encodes, which assumptions it permits, and whose realities it can recognize. The critique is well-rehearsed: mainstream agentic AI is born of surveillance, prying open everything from calendars to encrypted chats and funneling the take through vendor-controlled clouds. Proprietary stacks promise protection but lock the perceptual machinery of AI — the systems that decide what it can see, process and remember, along with their data trails — behind contractual walls. Open-weight models flip that asymmetry with their (partially) auditable code, local fine-tuning and domestic data paths that keep the epistemological workshop at home, enabling the polity that lives with the system to inspect, contest and reshape what it is allowed to know.

The third stack movement is, at heart, a contest over who gets to script the next layer of the world’s intelligibility. As scaffolding, it stands at the threshold of perception, where infrastructure sets the conditions of appearance. Every technical detail — chips, weights, data residencies — reads as a clause in a deeper argument: Sovereignty is the power to decide what appears to machines — and, through them, to the humans and institutions that depend on their judgments — and nonaligned stacks make that power visible precisely because they embody a third way, asserting authorship outside the U.S.-China duopoly.

AI sovereignty here refers less to territorial command than to infrastructural authorship. It names the capacity to decide what kinds of knowledge are encoded, which models speak and under what terms. Rather than a political slogan, it materializes in the stack itself — in weights, training data, licensing regimes and dependencies. In this sense, sovereignty is enacted as a design choice, shaping what becomes visible to machines and, by extension, to societies.

Epistemic Dissonance

This global divergence is producing what we might call epistemic dissonance: not just disagreement about values or governance, but incompatibilities at the level of what can be known, predicted or rendered actionable by AI systems. Each stack encodes a distinct epistemic posture — one that determines how knowledge is structured, what data is treated as relevant and which forms of uncertainty are permitted or pre-emptively excluded.

Proprietary LLMs, for instance, are often trained on vast but opaque corpora: Reddit threads, scraped web content, material obtained under undisclosed licensing agreements. They are optimized for scale, fluency and legal insulation rather than contextual fidelity. These models are brittle to local nuance, struggle with underrepresented dialects and tend to encode dominant cultural logics even as they claim universality. By contrast, emerging localized language models — such as those trained in Vietnam, Indonesia or the UAE — often work with culturally specific corpora: state archives, vernacular media, annotated speech from linguistic minorities. Their parameter counts may be smaller, but their epistemic frame is tighter. They are not just less powerful; they are differently calibrated.

This is not simply a question of bias or inclusion. It is a structural matter: what kinds of questions a model is designed to answer, what counts as a valid input and whose epistemologies are legible within its architecture. We might think of each stack as offering different epistemic affordances — a term borrowed from design to describe the range of actions a system enables or inhibits. Some stacks are built to predict consumer preferences or automate content generation; others are tuned to support governance, translation or educational tasks in linguistically diverse environments. What gets excluded — polysemy, dialect variation, historical opacity — is just as crucial as what gets encoded, because these choices determine whose worlds become legible to machines — and by extension, retrievable to future humans through them — and whose are consigned to obscurity.

Stack governance, too, reflects these epistemic fractures. Proprietary stacks tend to obscure: their weights are closed, their decision-making pipelines buried behind APIs and privacy disclaimers. They produce legibility for the end-user while rendering themselves illegible to regulators and the public. In contrast, open-weight or semi-open stacks may fragment the field: allowing local actors to fork, fine-tune or redeploy models in ways that increase heterogeneity — and with it, epistemic pluralism. But this pluralism comes at the cost of consistency, interoperability and, in some cases, centralized safety oversight — risks that can leave systems unstable, create frictions across borders, and open governance gaps precisely where alignment is most needed.

“The third stack movement is, at heart, a contest over who gets to script the next layer of the world’s intelligibility.”

The result is not a single AI world, but overlapping cognitive infrastructures, each generating its own truths, exclusions and forms of abstraction. This is what makes Vietnam’s infrastructural improvisations so significant: They are not just a geopolitical hedge; they offer a glimpse into the emergent politics of epistemic design — where building a stack means deciding not only how intelligence works, but whose world it recognizes.

For example, Vietnam’s PhoGPT, a 4-billion-parameter open-source model, was trained from scratch on a 102-billion-token Vietnamese corpus — including web-crawled news, legal texts, books, Wikipedia, medical journals and more — resulting in a model attuned to Vietnam’s phrasing, governance and linguistic norms rather than Reddit-centric English idioms. In contrast, U.S. models like OpenAI’s GPT-4 are trained on massive English-language corpora scraped from sources such as Reddit, Wikipedia and licensed publishers, optimizing them for global English fluency but leaving them brittle to local nuance and underrepresented dialects. Meanwhile, Baidu’s Ernie Bot redirects queries on Tiananmen Square toward state-approved historical summaries, reflecting how its stack is calibrated to Chinese state information controls. These divergences show how different stacks literally decide which worlds become legible to machines, and which are erased.
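
For readers who want to see this calibration firsthand, PhoGPT’s chat variant can be queried entirely on local hardware. A minimal sketch, assuming the public vinai/PhoGPT-4B-Chat checkpoint and the prompt template described on its model card.

```python
# A minimal local query to PhoGPT-4B-Chat. The checkpoint defines a custom
# architecture, hence trust_remote_code=True; the prompt template follows
# the published model card (an assumption worth re-checking against it).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "vinai/PhoGPT-4B-Chat"
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

prompt = "### Câu hỏi: Thủ đô của Việt Nam là gì?\n### Trả lời:"
ids = tok(prompt, return_tensors="pt")
out = model.generate(**ids, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```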

The Rise Of Agents

This dissonance becomes more acute with the rise of AI agents — models that don’t merely answer prompts but pursue goals, make decisions and interact with digital or physical environments on behalf of users. As these agents move from lab demos into real-world workflows — coordinating tasks, navigating interfaces, acting autonomously — the epistemic stakes of stack design deepen.
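
Stripped to a skeleton, what distinguishes an agent from a chatbot is a loop: observe, decide, act, repeat. The toy sketch below uses a stub policy where a real deployment would call a model; every name in it is a placeholder.

```python
# The skeleton of an agent: not one answer to one prompt, but a loop that
# observes an environment, chooses actions and persists state toward a goal.
# ToyEnv and the stub policy are placeholders for a real model and workflow.

class ToyEnv:
    def reset(self) -> str:
        return "inbox: 1 unprocessed residency form"

    def step(self, action: str) -> str:
        # Deterministic toy transition standing in for a real system of record.
        return "form filed" if action == "file_form" else "no change"

def policy(observation: str, goal: str) -> str:
    # Stub decision rule standing in for an LLM call.
    return "stop" if goal in observation else "file_form"

def run_agent(env: ToyEnv, goal: str, max_steps: int = 5) -> list:
    memory, obs = [], env.reset()
    for _ in range(max_steps):
        action = policy(obs, goal)
        memory.append((obs, action))
        if action == "stop":
            break
        obs = env.step(action)
    return memory

print(run_agent(ToyEnv(), goal="form filed"))
```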

An agent trained on a proprietary U.S. stack might assume individual agency, default to English-language documentation or prioritize efficiency over negotiation. An agent built on a localized Vietnamese or Indonesian model, by contrast, might be embedded with different priors — attuned to collective coordination, informal hierarchies or context-sensitive constraints. These are not just behavioral quirks. They are epistemic scripts — coded assumptions about what the world is, how it works and how action within it should unfold.

This contest over agentic scripts is already unfolding in China, where the development of local AI agents is accelerating apace. In the West, attention has pivoted to GPT-4o’s conversational fluidity — and more recently to GPT-5’s benchmark-beating claims — but Chinese companies like Butterfly Effect, Alibaba, Zhipu and ByteDance are building systems that move past chat entirely. These agents execute, rather than merely respond. Designed to eventually interact across tightly integrated app ecosystems, they perform tasks, process forms and coordinate across services with minimal user input. Interfaces are mobile-first, frictionless and oriented around action rather than dialogue.

This divergence is infrastructural rather than merely functional. These agents are trained on domestic data regimes, embedded within governance systems and calibrated to behavioral norms that depart sharply from the design assumptions of U.S.-led stacks. In the Chinese model, the agent is not a synthetic colleague or expressive companion. It is an operative node: a procedural intermediary within a platform stack where commerce, communication and administration blur.

Agents In The Real World

A few notes on the readiness and resistance of AI agents are in order. To understand how agents might materialize in practice, we must first understand the texture of the workflows they are meant to inhabit — and the uneven terrain of digital infrastructure, cloud uptake and process standardization that shapes their integration. AI agents have become the new object of desire in both technical and commercial imaginations. They take initiative, coordinate across systems and promise a shift from reactive tools to goal-driven collaborators. This shift, however, brings its own dissonance — especially as agents begin to traverse real-world workflows.

A parallel tension emerges in Western enterprise circles. In the private equity world, agents are spoken of with urgency (“Must deploy across the portfolio”) or with skepticism (“There’s no measurable ROI”). But both positions flatten a third truth: Most companies are structurally unprepared. Agents that run continuously, interact across siloed systems and make autonomous decisions require foundational upgrades — stable APIs, interoperable data, well-mapped workflows. Without this substrate, autonomy becomes a liability. Systems buckle, provenance vanishes, pilots stall.

As computer scientist Arvind Narayanan observes, technologists often confuse resistance with unreadiness. If the world hasn’t adopted agents at scale, it is not for lack of vision but because most infrastructures were never designed to support continuous, self-initiating computation. And more than that: Most jobs, like most systems, are not reducible to discrete tasks. The hardest-to-automate dynamics are often precisely those that evade formalization — at the edge of instruction, across tacit boundaries.

“Vietnam’s infrastructural improvisations … offer a glimpse into the emergent politics of epistemic design — where building a stack means deciding not only how intelligence works, but whose world it recognizes.”

This is where stack design re-enters. A Vietnamese or Indonesian agent, trained on local workflows, may encode different epistemic assumptions — informal consensus over explicit delegation, ambiguity-tolerant reasoning over strict logic. These differences are not bugs but adaptations to infrastructural realities. In this sense, nonaligned agents are not just alternatives, but artifacts of situated constraint, designed to operate within locally legible systems.

The task, then, is not to mimic Silicon Valley’s agent paradigm, but to script agency from the bottom up — on top of architectures that can carry it and in languages that local systems can understand. Until then, every claim of intelligent delegation risks producing more opacity than autonomy.

If models encode knowledge, agents execute it. They become emissaries of the stack that spawned them. The question is not just which models get built, but which agents get deployed — and in whose image. In this light, Vietnam’s third stack is more than a hedge against platform dependence: it is a rehearsal for a future in which AI agents — trained locally, governed modularly — enact a worldview not defined by Silicon Valley or Beijing, but by the granular, situated logics of a sovereign digital ecology.

Sovereignty In Pieces

There’s a better question than which bloc Vietnam will choose: Who decides what alignment can look like? Most countries will not build an AI stack from scratch; they will adopt, adapt and hybridize — assembling intelligence from components that are not entirely their own. In doing so, each acts as an infrastructural bricoleur, wiring together a stack that belongs to neither hegemon. In the space between black-box dependence and infrastructural refusal, a new sovereignty is taking shape — one weight, one corpus, one fork at a time. The third stack is no local curiosity; it is a preview of how much of the world will build.

The future of AI will not be charted by accelerationist slogans or neatly layered diagrams. It will surface — messy, uneven, tactical — from the friction of adaptation and the patient labor of coaxing disparate systems into dialogue. To read this terrain is to steer between hype and despair, tuning into the pulse of alignment: code splicing into cable, vision bending to vernacular, sovereignty assembled incrementally. Like a signal routed through stray relays, the coming architecture will glow with detours that seldom make headlines yet quietly redraw the map.