27 Nov Why Should the UN “Govern AI for Humanity”: What is at Stake and What is the Urgency?
[Jimena Sofía Viveros Álvarez is a Mexican international lawyer, expert on Artificial Intelligence, a member of the UN Secretary General’s High-Level Advisory Body on AI, a Commissioner for the Global Commission on Responsible Artificial Intelligence in the Military Domain, and an expert on AI for the OECD.]
The origins of artificial intelligence (AI) date back to 1956. However, within the last decade it has evolved exponentially and is increasingly being incorporated into all aspects of our lives. Today, it is clear AI will have a major global impact that will redefine the future of humanity.
AI could be a driver for positive change, as it has the potential to spark innovation, enhance data-driven decision-making and boost the progress of the United Nations (UN) 2030 Agenda’s Sustainable Development Goals (SDGs).
However, AI’s growing popularity has already induced an unprecedented surge in machine learning applications. Accordingly, States and industry alike have intensified their efforts to develop increasingly sophisticated systems at an exponential rate and without the necessary guardrails, creating a de facto arms race for AI superiority. These dynamics are reminiscent of the Cold War, with States competing for technological supremacy, with the novelty that this time industry is also participating.
As AI development continues to advance, granting higher degrees of autonomy to virtually any hardware, the lack of regulation alerts us to the possibility of overall control escaping from humans. In this regard, the technology’s black-box conundrum, which renders systems unpredictable and unexplainable, as well as its inherent biases, brittleness, hallucinations, and misalignments, could lead to catastrophic and existential risks for humanity.
The unrestrained development, deployment and use of AI poses several risks to humanity and, particularly, to vulnerable groups, such as children, women and girls, and minorities, especially in the Global South. Furthermore, its risks in critical civilian fields with a direct impact on human rights, such as law enforcement, the administration of justice, border control, and the environment, are already materializing.
In the military domain, AI’s implications for international peace and security have been discussed by the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE on LAWS) of the Convention on Certain Conventional Weapons (CCW) for almost a decade now. Additionally, other use cases, such as AI decision-support systems, have recently attracted attention, as these technologies risk lowering the threshold for the use of force and raise challenges for compliance with international humanitarian law and human rights law.
In either case, one of the major risks of AI is its inherently general-purpose nature, which could be exploited by non-State actors, such as organized crime and terrorist groups, to carry out illicit activities, as open-source models make the technology easily accessible to everyone.
In this context, to successfully harness AI’s opportunities for positive change whilst mitigating its risks, we need holistic, overarching global governance. This should encompass not only the technology’s technical aspects, but also the ethical, legal and societal dimensions it impacts, to guarantee that its design, development, deployment and use are for the benefit and protection of all of humanity.
The need for a global approach – the UN’s Role in AI’s Governance
Efforts to govern AI have been ongoing throughout the last decade, yet these have typically been slow-paced. Nonetheless, the exponential evolution of the technology in the last year has amplified the sense of urgency, which is reflected in the increasing number of governance initiatives.
Currently, there is a rich yet diverse ecosystem of multiple regional and multilateral efforts towards the governance of AI lato sensu, including the United Nations Secretary General’s High-Level Advisory Body on AI (UN-HLAB); the G7 Hiroshima Process; the G20’s Guidelines; the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law; the European Union AI Act; the Association of Southeast Asian Nations Guidelines for AI; the African Union’s Continental AI Strategy; the Global Partnership on AI New Delhi Declaration; the Organization for Economic Co-operation and Development’s (OECD) AI Principles; the Bletchley Declaration on AI Safety; Seoul’s Declaration for Safe, Innovative and Inclusive AI; the UN Resolutions on “Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development”; and “Enhancing International Cooperation on Capacity-Building of Artificial Intelligence”, as well as several ethical instruments, including the UNESCO’s Declaration on the Ethics of AI; the Santiago Declaration to Promote Ethical AI in Latin America and the Caribbean; and the Rome Call for AI Ethics.
Regarding the military domain, we have the Guiding Principles of the GGE on LAWS of the CCW; the Caribbean Community’s Declaration on Autonomous Weapons Systems; the Global Commission on Responsible Artificial Intelligence in the Military Domain; the United States-led Political Declaration on Responsible Military Use of AI and Autonomy; the Joint Call for Action by the UN Secretary-General and the President of the International Committee of the Red Cross for States to Establish New Prohibitions and Restrictions on Autonomous Weapon Systems; as well as the UN General Assembly’s Resolution 78/241 on Lethal Autonomous Weapons Systems, which stresses the urgent need to address the challenges posed by these technologies and requests a report reflecting the views of States, international organizations and civil society, of which an advanced draft was published recently.
Although the existing patchwork of efforts is a welcome development and paves the way forward, the diversity and divergence amongst these initiatives is itself a governance risk, as it leads to fragmentation. Moreover, they are not inclusive or global, and they contain overlaps and contradictions that open gaps which could be exploited for malicious purposes, ultimately undermining the global legal order. Furthermore, AI has an intrinsically transboundary effect, which necessitates a coherent and harmonized global response that is adaptable across different contexts and representative of different regional perspectives. Yet the groups of States behind these initiatives also highlight a dire reality: countries from the Global North predominate in the regulatory sphere, whereas Global South States, more often than not, are confined to a passive observer role, both in terms of the development of AI and of its regulation.
In light of the above, it is clear that the UN is uniquely positioned to serve as a truly inclusive convening platform to foster global governance of AI.
First of all, it is composed of 193 Member States, which makes it the international platform with the largest and most universal representation, allowing voices from all over the world to raise their context-specific concerns and be heard on an even playing field. Moreover, its unparalleled legitimacy offers an opportunity to promote trust among politically divergent States and to ensure equal footing between nations. Additionally, it could serve as a facilitator to reach consensus and/or foster regulatory harmonization amid all relevant initiatives.
Moreover, current governance efforts and existing frameworks typically exclude military use cases from their purview, aiming to regulate only the civilian domain, or at least to do so “first”. This division is often incentivized by policy-makers from certain States who consider that security matters should be addressed only at the national level.
Nonetheless, it is the author’s view that this segregation is illusory and a pragmatic impossibility, as AI’s dual-use nature intrinsically blurs the line between civilian and military applications, rendering it unfeasible to dissociate one from the other. Ultimately, the lack of coherent and integral regulation facilitates the misuse or abuse of the repurposability of AI systems by non-State actors, which clearly entails transboundary spill-over effects, thereby threatening international peace and security.
Recalling the San Francisco Conference of 1945, the promotion of peace and justice was at the center of the discussions and is therefore embedded in the Preamble and throughout the UN Charter, including in its purposes (Article 1) and as a core element of Chapters VI and VII. Thus, in the author’s firm opinion, it should still be at the core of any global governance regime, especially for disruptive new and emerging technologies such as AI. Therefore, to reap any of AI’s potential benefits, we must first focus on preventing and mitigating its inherent risks, since “[t]here can be no sustainable development without peace”, as recognized by the Deputy Secretary-General, Amina Mohammed.
Last but not least, most of the existing instruments are not legally binding, i.e., they are “soft law”, and while they enrich the overall governance ecosystem, they are imperfect by design: their implementation and compliance rely on the good will of States (and Big Tech leaders), lacking any means of enforceability and/or accountability, since there is no specialized and centralized authority to bring about universal coherence.
This absence of enforceable governance mechanisms has allowed the tech industry’s self-regulation to prevail, leading to notorious conflicts of interest that advance unduly flexible “commitments”.
In this regard, it is imperative for the international community to come together and collaborate on building a binding and enforceable regime of global AI governance, establishing clear red lines and strict regulations to prevent and mitigate AI’s risks, alongside positive obligations paired with accountability and remedy mechanisms.
This framework could begin by operationalizing the principles of inclusiveness, transparency, explainability, impartiality, trustworthiness, fairness, reliability, robustness, security, privacy protection, safety, adherence to human rights, and accountability, for which there is already wide consensus derived from a myriad of initiatives such as the UNESCO’s Recommendation on the Ethics of AI, OECD’s AI Principles, the Hiroshima Process, the Rome Call and the UN-HLAB’s Interim and Final Reports.
To achieve this, binding norms should include verifiable requirements to ensure AI is safe by design, concrete export and import regimes to prevent any State or non-State actor from acquiring unsafe models, and mechanisms to ensure accountability throughout the technology’s life cycle, including its design, development, deployment, use and decommissioning.
The case for an International AI Body
The author advocates that it is of the utmost necessity to build consensus to create a specialized, universal and centralized authority endowed with a robust mandate to serve as a global convening platform for norm-setting, and with adequate capabilities to verify, monitor and even enforce implementation and compliance, preventing the blind pursuit of technological advancements in all domains which could endanger the future of humanity.
Back in June 2023, UN Secretary-General António Guterres backed the idea of an international AI agency, which was reflected in his New Agenda for Peace. There, he called for a tailored approach, including “the possible creation of a new global body to mitigate the peace and security risks of [AI] while harnessing its benefits to accelerate sustainable development”.
In this regard, the Interim and Final Reports of the UN Secretary-General’s High-Level Advisory Body on AI, of which the author is a member, discuss several institutional functions which could eventually be performed by an International AI Body. These include the assessment of current and future capabilities to provide a common understanding of the technology and its implications. Additionally, the Body should be able to assist States with the implementation of standards and norms into their domestic legal systems, and with capacity building, especially in developing countries, so that they are prepared for AI’s differentiated impacts in their respective contexts.
In the author’s view, this International Body could serve as a convening point to coordinate and harmonize existing principles, ethical guidelines and soft-law regulations, as well as to oversee compliance with any international agreement on this topic. Its concrete form and functions should draw from the vast experience and lessons learned of other existing models, both within the UN structure and in the broader international ecosystem.
For instance, the Intergovernmental Panel on Climate Change’s model could offer insights into how to build consensus on the technology and its implications.
On the other hand, the International Civil Aviation Organization, or the International Maritime Organization, could serve as models for how an AI agency could identify, promote and harmonize common norms and standards.
Additionally, the International Atomic Energy Agency or the Organization for the Prohibition of Chemical Weapons, which follow a dual-action approach (i.e., recognizing the technology’s substantial opportunities while addressing its dangers across domains), provide guidance on mechanisms to prevent and mitigate AI’s risks without hindering technological progress.
In this regard, the author argues that an International AI Body should serve as the central authority to harness and facilitate coordination among different initiatives, ensuring a cohesive and holistic governance approach, avoiding further fragmentation and ensuring homogeneous compliance with a common global governance framework for AI. In doing so, it could assist in identifying red lines, as well as in coordinating the development and implementation of harmonized standards and norms, to ensure AI is a driver for good whilst establishing clear-cut obligations for States and private industry alike, guaranteeing that the technology can be used safely throughout its entire life cycle.
The argument goes further: most importantly, this suggested Body should be bestowed with enforcement, monitoring and verification functions, so that it could serve as a “Guardian”, i.e., an institution that oversees compliance with and implementation of the applicable regulatory frameworks. This could help deter, and provide a coordinated response capacity to, destabilizing uses of the technology by any actor, whilst also establishing effective reporting procedures and early-warning indicators for any AI incident or related hazard. This regime should also detail appropriate accountability mechanisms for all actors throughout the entire life cycle of AI systems.
In any case, as a “Guardian”, its mandate should include the prevention and mitigation of AI risks, whilst ensuring that the technology’s benefits and opportunities are fairly shared among all of humanity, building more equitable and resilient AI ecosystems and thus bridging, rather than widening, the digital divide.
As a citizen of the world and of the Global South, the author affirms that this International Body would effectively remedy the legal voids in AI’s global governance in an equitable and inclusive manner, addressing the current lack of enforceability and accountability mechanisms previously mentioned, which are quintessential to safeguarding universal peace and security vis-à-vis the inherent dual-use risks of these technologies.
Conclusions
It is undeniable that we are at a crossroads for our species, facing the defining technology for the generations to come.
With the adoption of the Pact for the Future and Global Digital Compact at the Summit of the Future last September, we are privileged with unique momentum to reinvigorate proactive multilateralism, reaffirm our commitments to existent norms and agree on concrete and binding solutions to mankind’s greatest challenge so far.
Particularly, the implementation phase of the Global Digital Compact will be essential in defining the future of AI, as one of its core objectives is the reinforcement of the international governance of AI. Moreover, the UN High-Level Advisory Body on AI published its Final Report, with concrete recommendations for global governance, launched on 19 September 2024 in the context of UNGA 79.
Nevertheless, without the appropriate “muscle and teeth”, any form of normative or institutional governance would be moot. In this context, States must seize this catalytic juncture to achieve a global and interoperable regime that ensures AI’s governance can materialize and be effective and adaptable across contexts and domains.
The time is ripe for the international community to come together and transform the aforementioned patchwork of initiatives into globally binding commitments and norms, which, by mere congruence, necessitates the creation of a universal “AI Guardian” in the form of a specialized International Body, with the appropriate implementation, monitoring, verification, enforcement, accountability, remedies-for-harm and emergency-response provisions to effectively embody the governance of AI for the benefit and protection of all of humanity.
Following the Secretary-General’s New Agenda for Peace, and in the aftermath of the Summit of the Future, the author remains optimistic, as the consensus reached is a great milestone on the path to AI’s global governance, and thus towards the creation of an international agency and strengthened trust in the international legal order at this crucial moment in history.