Re-Coding Agency: The Planetary Stakes of AI Sovereignty and Shared Governance
Blog Post

May 1, 2025
Across ministerial offices from Brasília to Beijing, the race for “AI sovereignty” has become the new north star of national technology policy. Yet as our Digital Futures symposium series reveals, what passes for sovereignty in the AI age often amounts to little more than political theater—a carefully constructed narrative where the appearance of autonomy masks the profound dependencies and power asymmetries that define the global AI landscape.
Decoding this illusion reveals a more complex reality. The rhetoric suggests a simple formula: Build a national computing cluster, establish a regulatory framework, declare data localization rules, and sovereignty is achieved. But meaningful technological independence demands mastery across five interlocking dimensions: governance frameworks, computing infrastructure, human capital, algorithmic accountability, and data flows. By this measure, only the United States and China approach anything resembling genuine autonomy—and even they remain bound by complex interdependencies that contradict their nationalist narratives.
For the rest of the world, particularly countries in the Global South, decoding sovereignty reveals more nuanced dilemmas. As Jeremy McKey demonstrates, middle powers face difficult choices in navigating between brain drain and “brain circulation,” between integrating with global AI value chains and risking exploitation of their workforces. The microwork sector offers entry into these chains but often reproduces colonial extraction patterns, creating what we have termed the “janitorial class of AI”—an invisible workforce maintaining the systems that power Silicon Valley’s profits while reinforcing dependencies that contradict sovereignty claims.
The physical substrate of AI sovereignty proves equally problematic. The compute supply chain—from chip design to fabrication to deployment—creates a new form of digital stratification. While American tech giant NVIDIA and its Chinese counterpart Huawei sell the dream of technological independence through national AI infrastructure projects, they simultaneously ensure continued dependence on their proprietary ecosystems. As nations pour billions into these projects, we must ask: Are citizens witnessing sovereignty, or merely subsidizing a wealth transfer from public coffers to domestic and international tech elites while empowering authoritarian overreach?
Regulatory frameworks present their own contradictions. As Min Jiang argues, AI algorithms raise unprecedented questions about accountability as they self-modify and create new code. The United States largely delegates governance to industry self-regulation, while the European Union builds an expansive regulatory architecture hoping to achieve global influence, even as China pursues both technological self-sufficiency and assertive state control. Meanwhile, corporate power continues to concentrate, challenging democratic oversight and equitable distribution of AI’s benefits.
The ultimate paradox emerges in data governance. Arindrajit Basu astutely observes that data localization policies—often criticized as internet-fragmenting nationalism—frequently represent rational responses to deep power asymmetries in the global digital economy. When most of the world’s data is stored in U.S. servers and controlled by a handful of tech giants, countries seeking economic development and citizen protection have limited alternatives to localization mandates. Yet these measures alone cannot resolve the underlying inequities.
Taken together, these analyses reveal that the AI sovereignty discourse often masks a more fundamental question: sovereignty for whom? When national technology policies primarily benefit political and economic elites while exposing citizens to algorithmic harms, surveillance, and economic disruption, they perform sovereignty without delivering its substance. As Swati Srivastava argues, meaningful governance requires balancing innovation with responsibility through democratic coalitions, market incentives, risk-based frameworks, and cross-regional solidarity.
The path forward demands a more honest accounting of power—where it resides, how it’s exercised, and in whose interests. Rather than pursuing the mirage of total technological independence, nations might instead focus on strategic autonomy in critical domains while building collective leverage through regional coalitions. The European Union’s regulatory approach and India’s digital public infrastructure strategy present alternative models, yet each carries its own contradictions—the EU risks bureaucratic overreach without building technological capacity, while India’s infrastructure projects may reinforce state surveillance capabilities even as they promote digital inclusion. No perfect template exists, only strategic choices with complex tradeoffs.
For readers navigating this complex landscape, our symposium series offers not just analysis but a decoder—illuminating the genuine power dynamics and hidden dependencies that shape our AI future beyond the rhetoric of technological independence. The question is not whether nations can achieve complete AI sovereignty, but whether they can develop sufficient agency to ensure technology serves their citizens rather than subjugating them to new forms of digital control.
In an increasingly multipolar world order, artificial intelligence is reshaping conceptions of national sovereignty—and vice versa. This piece introduces a series exploring AI sovereignty across five key areas: governance, computing infrastructure, human capital, algorithms, and data. Developed from the Planetary Politics program’s 2024 Digital Futures Symposium, it sets the stage for the 2025 gathering in South Africa.
Read the rest of our Who Controls AI?: Global Voices on Digital Sovereignty in an Unequal World collection.