First Global Treaty to Regulate AI Signed
- By John K. Waters
- 09/09/24
The United States, United Kingdom, European Union, and several other countries have signed the world's first legally binding treaty aimed at regulating the use of artificial intelligence (AI). "The Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law" was developed by the Council of Europe and opened for signatures on September 5, 2024. The primary goal of the treaty is to ensure that AI systems are designed, developed, deployed, and decommissioned in ways that respect human rights, support democratic institutions, and uphold the rule of law.
"This first-of-a-kind treaty will ensure that the rise of Artificial Intelligence upholds Council of Europe legal standards in human rights, democracy and the rule of law," said Marija Pejčinović Burić, Secretary General of the Council of Europe, in a statement. "Its finalization by our Committee on Artificial Intelligence (CAI) is an extraordinary achievement and should be celebrated as such."
The treaty was created to mitigate risks while promoting responsible innovation by establishing regulations for AI systems and setting a global standard for transparency, safety, and accountability in AI use. It was adopted by the Council of Europe on May 17, 2024.
The treaty sets out a number of conditions, including:
- Human-centric AI: AI systems must align with human rights principles and uphold democratic values.
- Transparency and accountability: The treaty requires transparency in how AI systems operate, especially in cases where AI interacts with humans. Governments must also provide legal remedies if AI systems violate human rights.
- Risk management and oversight: It establishes frameworks for managing risks posed by AI and sets up oversight mechanisms to ensure that AI systems comply with safety and ethical standards.
- Protection against misuse: It includes safeguards to prevent AI from undermining democratic processes, like judicial independence and public access to justice.
The treaty applies to all AI systems except those used in national security or defense, though it still requires that even these activities respect international law and democratic principles. The treaty must be ratified by five signatory nations before it enters into force, and it builds on prior AI regulatory efforts, such as the EU AI Act. Other nations that have signed include Israel, Norway, and Iceland.
Although the treaty emphasizes preventing AI from undermining democratic institutions, some critics argue that its broad principles may lack enforceability, particularly in areas such as national security, which are exempt from full scrutiny. Nonetheless, this treaty marks a significant step toward global AI governance.
How exactly would the treaty be enforced? It outlines several key enforcement mechanisms:
- Legal Accountability: Countries that sign and ratify the treaty are required to adopt legislative and administrative measures to ensure AI systems comply with the treaty's principles. This includes protecting human rights and promoting transparency and accountability in AI deployment.
- Monitoring and Oversight: The treaty introduces oversight mechanisms that monitor the adherence of AI systems to the established standards. However, critics have pointed out that the enforcement mechanism may largely rely on national governments monitoring their AI sectors, which may not always be consistent or effective.
- Remedies for Violations: The treaty mandates that signatories provide legal remedies for individuals harmed by AI-related human rights violations. This could involve procedures for individuals to challenge AI decisions or seek compensation when AI systems cause harm.
- International Cooperation: The treaty encourages collaboration between signatories to harmonize AI standards, share best practices, and address cross-border AI issues. This is crucial as AI technologies often transcend national borders.
- Adaptability: The framework is designed to be technology-neutral, allowing it to evolve as AI systems develop over time. This adaptability is key to maintaining relevant and enforceable standards as AI technologies rapidly change.
Although these mechanisms create a structure for enforcement, their effectiveness remains to be seen, especially when considering the exceptions the treaty provides in areas such as national security.
The treaty was opened for signature at a conference of Council of Europe justice ministers held in Vilnius, the capital of Lithuania. The signing came just a few months after the bloc's ministers gave final approval to the EU's Artificial Intelligence Act, which regulates the use of AI in "high-risk" sectors.
Read the full text of the treaty here.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].