In 2025, there will be a correction in AI and geopolitics, as world leaders increasingly understand that their national interests are best served by the promise of a positive and cooperative AI future.
The post-ChatGPT years in AI discourse can be seen as somewhere between a gold rush and a moral panic. In 2023, even as investment in AI hit record levels, technology figures including Elon Musk and Steve Wozniak published an open letter calling for a six-month moratorium on training AI systems more powerful than GPT-4, while others compared the risks of AI to those of “nuclear war” and “pandemics”.
This has understandably clouded the judgment of political leaders, pushing the geopolitical conversation about AI to some worrying places. At the AI & Geopolitics Project, my research organization at the University of Cambridge, our analysis clearly shows a growing trend towards AI nationalism.
For example, in 2017, President Xi Jinping announced plans to make China an AI superpower by 2030. China's “New Generation AI Development Plan” aims to bring the country to a “world-class” level in AI innovation by 2025 and to make it a major AI innovation hub by 2030.
The CHIPS and Science Act of 2022, alongside US bans on semiconductor exports, was a direct response, designed to favor domestic US AI capabilities and limit China's. In 2024, following an executive order signed by President Biden, the US Treasury also announced draft rules banning or restricting investment in Chinese AI.
AI nationalism frames AI as a battle to be won rather than an opportunity to be seized. Yet advocates of this approach could draw deeper lessons from the Cold War than the notion of an arms race. At that time, the United States, while striving to become the most technologically advanced nation, also used politics, diplomacy, and public administration to create a positive and aspirational vision for space exploration. Successive US administrations worked to gain support at the United Nations for a treaty protecting outer space from nuclearization, stipulating that no nation could colonize the moon and ensuring that space remained “the province of all mankind”.
Similar political leadership has been missing in AI. However, in 2025 we will start to see a shift back towards cooperation and diplomacy.
The AI summit in France in 2025 will be part of this change. President Macron has reframed the event away from a strict focus on AI “safety” risks and toward what he describes as pragmatic “solutions and standards”. In his video address to the Seoul summit, the French president made clear that he intends to tackle a broader range of policy issues, including how to genuinely ensure that society benefits from AI.
The United Nations, acknowledging that some countries have been excluded from the debate around AI, also announced its own plans in 2024 to move towards a more collaborative global approach.
Even the US and China have begun to engage in long-expected diplomacy, establishing a bilateral consultation channel on AI in 2024. While the impact of these initiatives remains uncertain, they clearly indicate that in 2025 the world's AI superpowers will likely pursue diplomacy instead of nationalism.