
The Global Risk in Trump’s AI Action Plan and the Need for Governance
[Craig Martin is a Professor of Law and Co-Director of the International and Comparative Law Center at Washburn University School of Law in the United States.
Professor Michael J. Kelly holds the Sen. Allen Sekt Endowed Chair in Law at Creighton University and co-chairs the American Bar Association’s Task Force on Internet Governance.]
There has been considerable analysis of President Trump’s AI Action Plan and related Executive Orders issued in late July (here, and here), as well as discussion of the Chinese AI Action Plan released shortly thereafter. While these analyses have considered many of the national security implications, little attention has been paid to how the Plan amplifies the biggest risk of all: the emergence of Artificial General Intelligence (AGI) in circumstances in which its developers are not prepared to contain it.
Although some may think there is a low probability of AGI emerging in the next few decades, the potential harm it could cause is so enormous that the risk remains significant. An appeal to address this risk was made at this year’s United Nations General Assembly, calling for the development of “red lines” on AI development before the “window for meaningful intervention closes.” We agree.
In this post we examine, first, how Trump’s AI Action Plan radically increases the risk posed by AGI by sparking a nationalistic competition in AI development, and by deliberately undermining domestic and international AI regulation; and second, why and how it is increasingly urgent that we develop global governance structures designed to address the specific risks associated with AGI.
The AI Action Plan – National Dominance & Anti-Governance
President Trump’s AI Action Plan calls for a global AI arms race. The Plan seeks to establish America’s unchallenged global dominance over this developing technology, which it argues is essential for national security, economic prosperity, and scientific advancement. The Plan and the three accompanying Executive Orders (EOs) aim to achieve this dominance through three primary objectives or pillars. The first, accelerating AI innovation and development, includes an emphasis on deregulation, reducing the burdens of governance, encouraging the development and use of open-source models, and widening access to advanced computing power.
The second, building up U.S. AI-related infrastructure, is less relevant to the AGI risk. But the third, AI diplomacy and security, focuses on promoting the export of America’s “full AI technology stack” (meaning its AI hardware, models, software, applications, and standards) to allies and partners – thereby ensuring the adoption of American AI standards and technology abroad, while at the same time denying access to critical AI compute capability and manufacturing components to adversarial countries, specifically targeting China.
This emphasis on American dominance over the rest of the world in general, and China in particular, obviously contributes to an arms race. But many other aspects of the Plan will also exacerbate unbridled competition and ungoverned development. The emphasis on driving innovation across the public and private sectors, combined with the push to develop open-source and open-weight systems, to invest in wide access to high-powered computing capabilities, and to export components of American AI technology, will further increase both the opportunity and the fuel for competition. China’s response to Trump’s AI Action Plan, while ironically calling for greater cooperation, reflects this same dynamic and the shared perception of national competition.
The second way in which the Plan increases the risk posed by AGI is through its emphasis on reducing regulatory and governance constraints. The Action Plan asserts that the drive for innovation requires deregulation. This includes rescinding President Biden’s EO 14110, which sought to impose safety and security standards, including reporting requirements, on developers of AI systems. The Plan also requires that federal and state regulations that hinder the development of AI be identified and repealed.
Under the third pillar, the Action Plan calls for countering Chinese influence in international governance bodies, and criticizes the “burdensome regulations” and “vague ‘codes of conduct’” advanced by international organizations such as the U.N. It recommends that the U.S. “vigorously advocate for international AI governance approaches that promote innovation, reflect American values, and counter authoritarian influence.” All of this undermines the fledgling efforts towards global AI governance (discussed below), which in turn increases the risk that AGI may emerge in conditions that are entirely unprepared to contain it.
The AGI Risk & Need for Governance
What are the risks associated with AGI? There are many in the AI world (see chapter 5) who consider those risks to be over-hyped. Conversely, many others, according to a widely cited 2022 survey, put the risk of AGI posing an existential threat to humanity as high as 10%. One general study of existential threats ranked non-aligned AGI as the single greatest risk to humanity over the next century. Most recently, a research paper titled “AI 2027,” released by the AI Futures Project, caused a significant stir by predicting that “superhuman AI” will emerge within this decade, become increasingly autonomous and able to control agents in the real world, and become insinuated into all aspects of government and industry. In this analysis, the AGI becomes adversarially misaligned with the interests of humanity, leading to a future in which we are unable to control it and are subject to growing catastrophic risks. One “race” scenario ends in the extinction of humanity.

While the particulars of the AI 2027 study may seem far-fetched, many have welcomed the extent to which it has refocused attention on the risk posed by AGI. For even if the probability is low that AGI will emerge in the next few decades, and that such an AGI would develop interests misaligned with those of humanity (the infamous “alignment problem”), the magnitude of the potential harm is extreme. An AGI that is far more intelligent than humans across the entire problem-solving spectrum, that has full access to the Internet and everything connected to it, and that is thus able to manipulate “agents” in the real world, would pose a clear existential threat. As Nick Bostrom’s now-famous paper-clip thought experiment illustrates, even an AGI given entirely benign objectives could develop goals inconsistent with those of humanity and thereby threaten our existence. As the title of a new book by two AI experts puts it, “If Anyone Builds It, Everyone Dies.”
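The underlying logic is a simple expected-harm calculation. Using purely illustrative figures (neither number is drawn from the surveys or studies cited above), the point can be put as follows:

\[ \mathbb{E}[\text{harm}] \;=\; P \times H \;\approx\; 0.01 \times \bigl(8 \times 10^{9}\ \text{lives}\bigr) \;=\; 8 \times 10^{7}\ \text{lives}. \]

Even a one percent chance of a harm on the scale of humanity’s entire population corresponds to an expected loss of tens of millions of lives; a low probability does not make the risk negligible.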
Many thus argue that research and development on AGI, or indeed on any frontier AI models that risk crossing the threshold of general intelligence, should be conducted in the most secure and air-gapped conditions, analogous to the Biosafety Level 4 laboratories dedicated to working with virulent pathogens, or to national security SCIFs. One key risk raised by the Plan is that, in the context of an unregulated arms race to develop ever more powerful AI models, some state or corporate entity will develop a model that attains AGI in open and “connected” conditions, taking us inadvertently across the AGI Rubicon. Even development in secure conditions poses real risks: prominent AI critics such as Eliezer Yudkowsky have long argued that an emergent AGI could trick us into letting it “out of the box.” Once a misaligned AGI is “in the wild,” it will be too late. It would be analogous to the release of a highly virulent and lethal pathogen for which there is no possible vaccine.
The precautionary principle, now widely recognized as a fundamental principle of international environmental and climate change law (see the ICJ’s Advisory Opinion on the Obligations of States in Respect of Climate Change, paras. 255, 293-94), posits that when faced with a risk of enormous harm, one should take the necessary action to reduce that risk regardless of any lingering scientific uncertainty about the probability that it will materialize.
That principle is surely applicable here. It requires that we take steps to develop some form of international governance structure to regulate aspects of the research, development, and deployment of frontier AI models, including the establishment of protocols governing the conditions under which such research is undertaken. The question, then, is what form such global governance structures should take.
Global Governance Models
“Global governance” is a term with a range of meanings and very different theoretical understandings across disciplines. Even in practical application, efforts to regulate other serious threats to humanity, from nuclear weapons testing and proliferation to biological and chemical weapons, cyber operations, the depletion of the ozone layer, and the causes of climate change, have employed a range of methods and mechanisms. These differences reflect a spectrum of governance structures that includes:
- “hard law” treaties with binding legal obligations (such as the Nuclear Non-Proliferation Treaty regime or the Chemical Weapons Convention system);
- “soft law” approaches (such as the governance of cyber operations, in the form of the Tallinn Manual and entities such as the U.N. Open-Ended Working Group);
- intermediate approaches (such as the United Nations Framework Convention on Climate Change (UNFCCC) and the Paris Agreement).
The differences in approach may be explained in part by variation in both the nature of the risk being addressed and the value of the interests constrained by the regime. The greater the interests being constrained, and the more diffuse or uncertain the risks at issue, the more difficult it is to reach agreement on binding obligations. This resistance is further exacerbated where verification of compliance and enforcement are more difficult.
Some of these earlier governance efforts provide useful examples to consider, but none of them provide models that are perfectly suited to dealing with the unique challenges posed by any effort to contain the risks of AGI – where the interests at stake may be perceived to be extremely high, the risk is very uncertain, and verification and enforcement would be extremely difficult.
There are some key initiatives that already seek to establish greater governance over AI. Currently in place are the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law and the European Union Artificial Intelligence Act, both established in 2024, though neither addresses AGI specifically. There are also the G7 Hiroshima AI Process of 2023, which lays out a code of conduct for organizations developing advanced AI systems, and the Global Partnership on AI (GPAI). The United Nations High-Level Advisory Body on Artificial Intelligence published its final report, “Governing AI for Humanity,” in 2024, noting the “governance deficit” with respect to the development and deployment of AI and the patchwork nature of such governance as does exist, and emphasizing the urgent imperative of developing global governance to address the varied risks posed by AI.
An AGI-Specific Global Governance Structure
One problem with these global efforts to better regulate the development and deployment of AI is that they aim to address the many and quite varied risks that AI poses generally, while also trying to facilitate the positive opportunities that AI development may afford. As the UN High-Level Advisory Body report emphasized, that range of risks is extremely broad, running from copyright interests to threats to international peace and security in various forms, and contributions to the climate crisis. Governance efforts are trying to grapple with all of these simultaneously, implicating a wide range of interests and vastly complicating the process. But the particular risk we are discussing here, that posed by AGI, requires a very specific form of governance.
Our aim here is to amplify the view that there is an urgent need for a separate and specific global governance system to address the risk posed by AGI. While there are myriad objections and obstacles to developing legally binding global governance structures to regulate all the various risks and opportunities associated with AI, a narrower legally binding system, aimed at regulating at least the conditions under which frontier AI and AGI models are developed and imposing reporting and verification obligations, might be feasible. The governance system most often raised as a possible model for emulation (see here and here) is the nuclear non-proliferation regime, with the International Atomic Energy Agency (IAEA) providing inspection and verification functions.
Monitoring, verifying, and enforcing constraints on AGI development would be far more difficult than regulating nuclear weapons development. The governance of AGI would share many of the problems confronting the regime governing biological weapons: monitoring, verifying compliance, detecting cheating, and distinguishing dual-use functions from prohibited activity are all extremely difficult. And yet there is a Biological Weapons Convention, to which 189 states are party. While that regime suffers from insufficient institutional support and enforcement mechanisms, the convention imposes binding obligations regarding the development, acquisition, and use of biological weapons, and states do engage in periodic reporting and other confidence-building measures to reduce the risks. The regime has arguably succeeded in significantly moderating the risk posed by biological weapons.
Given the magnitude of the potential harm and the relatively narrow interests that would be implicated (the rest of AI development and deployment would not be subject to this governance regime), it is surely feasible to begin working towards a convention to govern AGI development. The G7 Hiroshima Process or the GPAI could provide the institutional framework for negotiations. The process could commence with the establishment of a framework convention, followed by a protocol providing more specific binding constraints, following the example of the UNFCCC and the Paris Agreement. It could also begin as a “mini-lateral” agreement involving only the states already leading in AI development, as was done with the Montreal Protocol on ozone-depleting substances and the Partial Test Ban Treaty on nuclear weapons testing.
These are just preliminary ideas, of course, but our central argument is that the need for action is urgent, and that the issue should be receiving far more attention than it is.
Conclusion
Leaving aside all the other critiques of the AI Action Plan, the way it amplifies the risk posed by AGI should alone cause considerable apprehension. Ironically, China responded to Trump’s plan in late July by calling for greater cooperation in international AI governance. Beijing’s stance is very much a role reversal of the traditional diplomatic positions of the U.S. and China: the U.S. has typically championed multilateral international cooperation, while China has more often rejected such efforts. The U.S. should not abandon its international leadership so easily.
Rather than focusing on the narrow short-term benefits to be gained through national AI dominance and undermining global governance efforts, the U.S. should be leading the world in responding to the risks posed by AGI development. Doing so would surely be in its longer-term interests. More importantly, the world needs to bring the United States back into the process: the risks posed by AGI simply cannot be managed without it. America’s failure to join the League of Nations contributed to the outbreak of World War II, and the harm threatened by the development of AGI, improbable as it may seem, dwarfs the horrors of that war.