How Complexity Thinking Can Help the World Navigate AI
A Paper from the Study Group on Complexity and International Relations
Policy Paper
April 29, 2024
A new paper from New America, Princeton University, and Arizona State University examines how complexity thinking can help make sense of advanced artificial intelligence (AI) and inform policy to encourage the benefits and mitigate the risks of the technology.
In a 2000 interview with the San Jose Mercury News, the theoretical physicist Stephen Hawking was asked, “Some say that while the twentieth century was the century of physics, we are now entering the century of biology. What do you think of this?”
Hawking replied: “I think the next century will be the century of complexity.”
Hawking was referring to the science of complex adaptive systems: the study of the dynamics that govern any sufficiently large collection of interacting agents, whether cells in the human body, ants in a colony, or humans in the global economy.
Complexity is a useful lens for making sense of reality. It explains how patterns of cooperation and conflict emerge and why certain outcomes are surprising and unpredictable. But can it also point policymakers toward specific strategies, tools, and mechanisms for governing global challenges?
To explore this question, New America, Princeton University, and Arizona State University, with support from the Rockefeller Foundation, convened the Study Group on Complexity and International Relations. Composed of distinguished scholars and practitioners in the fields of evolutionary ecology, computer science, international relations, and financial risk, among others, the group met for a workshop to study how insights from complexity can inform global governance of advanced AI.
The resulting paper describes relevant aspects of complexity thinking, evaluates the distinct challenges of governing AI systems, and offers recommendations related to access and power, international relations and global stability, and accountability and liability. The paper does not provide a comprehensive analysis of international institutions for AI governance. Rather, it suggests ways that an emerging field of science can contribute to global stability and equality in the age of AI.
This paper was authored by Gordon LaForge, Anne-Marie Slaughter, Simon Levin, Adam Day, Allison Stanger, Ann Kinzig, Stephanie Forrest, Bruce Schneier, Cristopher Moore, Kevin O’Neil, Moshe Vardi, Nazli Choucri, Robert Axelrod, Sihao Huang, Steve Crocker, Tina Eliassi-Rad, Nick Silitch, and Merle Weidt.