Understanding the Scope of the Council of Europe Framework Convention on AI

[Karolína Babická is a Senior Legal Adviser at the International Commission of Jurists. Cristina Giacomin is a Legal Intern at the International Commission of Jurists]

Overview

In the past two years, the Committee on Artificial Intelligence (CAI) of the Council of Europe (CoE) has worked on the Framework Convention (FC) on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.

The final text of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law was adopted on 17 May 2024. The Convention was opened for signature on 5 September 2024. Among the first signatories are the European Union (EU), the United States, the United Kingdom, Israel, Norway, Georgia, Moldova, Iceland, Andorra, and San Marino.

The AI Framework Convention was drafted by representatives of the 46 CoE Member States, with the involvement of other States and organizations. Observers included the EU and various non-member States (Australia, Argentina, Canada, Costa Rica, Israel, Japan, Mexico, Peru, the Holy See, the United States of America, and Uruguay). The Convention is open for signature by Council of Europe Member States, the European Union, and non-member States that participated in its development. Other non-member States may accede to the Convention by invitation once it has entered into force, subject to unanimous consent from the Parties to the Convention following consultations by the Committee of Ministers of the Council of Europe (Articles 30 and 31). Representatives from civil society, academia, industry, and international organizations were also involved in the development of the Convention.

The Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law aims to ensure that activities throughout the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy, and the rule of law (Article 1). The Convention further addresses definitions, scope, and the general obligations of States in terms of protection of human rights, democratic processes, and respect for the rule of law. Chapter III (Articles 6-13) identifies principles applicable to activities within the lifecycle of AI systems: among others, it introduces the principles of human dignity, equality and non-discrimination, transparency, accountability, and safe innovation. Chapter IV (Articles 14 and 15) is dedicated to remedies, and Chapter V (Article 16) focuses on risk and impact management. Chapter VI (Articles 17-22) concerns the implementation of the Convention and includes provisions such as Article 18, covering the rights of children and persons with disabilities, and Article 20 on digital literacy and digital skills. Finally, Chapter VII (Articles 23-26) establishes follow-up mechanisms and cooperation and introduces an obligatory monitoring mechanism under Article 26.

The Scope of the Convention on AI

The scope of the Convention is established in its Article 3, stating that the scope of the Convention 

(…) covers the activities within the lifecycle of AI systems that have the potential to interfere with human rights, democracy and the rule of law (…) 

a) undertaken by public authorities, or private actors acting on their behalf (…),

b) Each Party shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors to the extent not covered in subparagraph a in a manner conforming with the object and purpose of this Convention. 

Each Party shall specify in a declaration submitted to the Secretary General of the Council of Europe at the time of signature or when depositing its instrument of ratification, acceptance, approval or accession, how it intends to implement this obligation, either by applying the principles and obligations set forth in Chapters II to VI of this Convention to activities of private actors or by taking other appropriate measures to fulfil the obligation set out in this subparagraph. Parties may, at any time and in the same manner, amend their declarations. (…)

Article 3.1 is the result of conflicting views and political pressures. The CAI’s mandate defines the Convention as a legally binding instrument with a transversal character, and the Explanatory Report highlights the intention to make the Convention binding on both public and private actors. On the other hand, media reports indicate that some States, including the United States, the United Kingdom, Japan, and Israel, advocated for excluding the private sector and limiting the scope of the Convention to activities of public authorities and private actors acting on their behalf. Both approaches have been criticized by various actors, including civil society organizations, Council of Europe bodies, EU institutions, and UN representatives.

The Convention applies to public authorities or private actors acting on their behalf, but it does not cover the activities of other private actors by default. Regarding these private actors, States Parties enjoy great flexibility: they can decide, through a declaration, how to address risks and impacts arising from activities within the lifecycle of AI systems. In particular, they may choose to apply the rules and principles set out in the Convention, or to take other appropriate measures to address those risks and impacts. However, neither the Convention text nor the Explanatory Report clarifies what these “other appropriate measures” can be or what this obligation concretely consists of. This lack of clarity can be detrimental to the certainty and predictability of the law, jeopardizing the overall aim of the Convention, which is to ensure the consistency of all activities within the lifecycle of AI systems with human rights, democracy, and the rule of law. The risk appears even more serious considering the important role the private sector plays in the development and deployment of AI systems.

Article 3.2 provides for a blanket exemption from the scope of the Convention for all activities within the lifecycle of artificial intelligence systems related to the protection of a Party’s national security interests. This is the first of three exemptions introduced in the final text of the AI Framework Convention. The Explanatory Report clarifies that this exemption applies to all activities concerning national security interests, regardless of the type of entities involved.

This paragraph introduces a broad exemption for all activities related to national security. Even though the paragraph refers to the need for compliance with the international human rights framework, States may still try to evade their human rights obligations by invoking national security grounds. Indeed, the use of national security arguments to circumvent human rights obligations in the AI field has already been documented by NGOs and UN Special Rapporteurs.

A blanket exemption based on national security reasons is not in line with the European Court of Human Rights’ (ECtHR) more cautious approach to balancing different interests and rights. The European Convention on Human Rights (ECHR) recognizes national security as a legitimate ground for restricting certain rights, in particular under Article 6 (right to a fair trial), Article 8 (right to private and family life), Article 9 (freedom of thought, conscience and religion), Article 10 (freedom of expression), Article 11 (freedom of assembly and association), and Article 2 of Protocol No. 4. However, Articles 8-11 ECHR allow rights to be limited only when the restriction is prescribed by law and necessary in a democratic society. The Court usually assesses whether a restriction of a right is legitimate, necessary, and proportionate to the legitimate aim pursued. The ECtHR has also ruled on various cases in which States have invoked national security reasons to reduce human rights protection, especially in the context of the fight against terrorism. In this regard, the Court has not always adopted a consistent position in its jurisprudence, at times granting a broader or narrower margin of appreciation to Member States.

The third paragraph of Article 3 imposes a significant limitation on the scope of the AI Framework Convention by excluding “research and development activities related to artificial intelligence that are not yet available for use.” This provision precludes any initial review of the compliance of these technologies with human rights and may lead to the creation of technologies that are not compliant with human rights by design.

This exemption contrasts with the original version of the AI Framework Convention, which explicitly recognized the application of the Convention to the design and development phases of AI systems. The Convention also does not clarify how to identify potentially harmful activities. As a result, the text leaves room for legal uncertainty that might lead to misinterpretation regarding the application of this provision.

Article 3.4 excludes all “activities related to national defence” from the scope of the Convention. This absolute exemption for AI systems used in national defence creates a considerable gap in human rights protection. As with national security, States could easily invoke national defence grounds to justify human rights violations. This is particularly worrying considering the extensive use of AI systems in the development and production of autonomous weapons. However, this exemption under the Convention does not mean that the CoE human rights framework does not apply to national defence activities.

Article 3.4 draws inspiration from Article 1 (d) of the Statute of the Council of Europe. Indeed, issues concerning sovereign matters such as defence and military expenditure, weaponry, alliances, the conduct of military operations, or political aspects of defence fall outside the CoE’s competences. However, under Articles 1 (a) and (b) and Article 3 of the CoE Statute, the Council of Europe has jurisdiction over all matters related to human rights protection, the protection of democratic institutions and accountability mechanisms, and respect for the rule of law. Article 1 (b) of the Statute extends this competence to agreements and common action in economic, social, cultural, scientific, legal, and administrative fields.

The current Article 3.4 also excludes systems developed for military purposes, including those developed exclusively for such purposes. Yet technological advances often begin in military contexts before later finding a civilian application, sometimes unexpectedly. More importantly, the fact that the Council of Europe has no competence in national defence matters does not mean that its human rights legal framework does not apply to national defence activities. In fact, all the values that this Framework Convention purports to protect, i.e. human rights and fundamental freedoms, democracy, and the rule of law, apply equally to cases or situations of national defence. This is evident from Article 15 ECHR, which allows derogations in times of war. Regarding this paragraph, the drafters did not follow the numerous recommendations from civil society organizations and human rights bodies. Instead, the EU Commission lobbied for an explicit exclusion of AI systems developed exclusively for national security and defence purposes, aligning the AI Framework Convention with the recent EU AI Act.

Keys to Effective Implementation

The adoption of the Convention is a crucial step towards ensuring that AI technologies are developed and deployed in a manner that respects and upholds human rights. It introduces vital safeguards, including the principles of human dignity, equality and non-discrimination, transparency, accountability, privacy and personal data protection and safe innovation, as outlined in Chapter III. Together, these principles create a framework that seeks to mitigate the risks posed by AI while fostering its benefits. By embedding these safeguards into law, the Convention plays a crucial role in guiding the responsible development and deployment of AI systems, ensuring they enhance rather than undermine human rights.

However, the loopholes in the actual scope of the Convention have the potential to enable conduct that negatively affects human rights protection. States Parties will have wide discretion in deciding whether the Convention applies to private actors, and the exemption for research and development activities, along with the two exemptions based on national security and national defence, leaves out highly relevant fields for the application of AI systems.

States Parties, when deciding to sign and ratify the Convention, have the opportunity to opt for the full protection of human rights when AI systems are being used by both public and private actors.

In particular, to ensure the compliance of all activities within the lifecycle of AI systems with human rights, democracy, and the rule of law, States Parties should, in their declarations, opt for a solution that guarantees the same level of human rights protection in both the public and private sectors. The Convention should apply equally to public and private actors.

States should ensure that the national security interests exception in Article 3.2 is applied only when it can still be guaranteed that AI system activities are conducted in line with their human rights obligations.

It would equally be in line with the original purpose of the Convention for States Parties to apply the Convention as far as possible to research and development activities (Article 3.3), as well as in national defence (Article 3.4).

Out of the first ten signatories so far, only Norway has made such a declaration, ensuring that the principles and obligations set forth in Chapters II to VI of the Convention apply to the activities of private actors. To provide the highest level of human rights protection, the other signatory States should follow this example.
