16 Companies Agree to Put Limits on Gen AI Systems

Leaders in the generative AI industry — including OpenAI, Microsoft, Google, and Anthropic — have agreed to pull the plug on their own AI technologies if they're deemed too dangerous.

The companies are signatories of the "Frontier AI Safety Commitments" document unveiled last week at the AI Seoul Summit. The document, which lays out guidelines for limiting AI misuse, was dubbed a "world first" by the U.K. government, which co-hosted the summit alongside the Republic of Korea.

The full list of signatories is:

  • Amazon 
  • Anthropic 
  • Cohere 
  • Google/Google DeepMind 
  • G42 
  • IBM 
  • Inflection AI 
  • Meta 
  • Microsoft 
  • Mistral AI 
  • Naver 
  • OpenAI 
  • Samsung Electronics 
  • Technology Innovation Institute 
  • xAI 
  • Zhipu.ai

Under the document's first and foremost goal, organizations are asked to "effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems."

Many of the signatories already have internal requirements meant to ensure the safety of their AI technologies. OpenAI, for example, unveiled an AI "preparedness framework" last year, though it's still in beta. It also recently formed a new AI Safety and Security Committee, albeit after disbanding its previous AI safety committee.

Microsoft, meanwhile, abides by its Responsible AI Standard developed in 2016. Meta and others are also independently exploring ways to "watermark" content created by their AI systems to limit misinformation, especially in light of this year's elections.

Critically, however, a tenet of this first commitment is that organizations must agree to kill development of AI systems that are beyond saving.

Specifically, they must define "thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable," and "commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds."

The companies are tasked with defining their kill thresholds over the coming months, with the goal of publishing a formal safety framework in time for the AI Action Summit, set to take place in France in February 2025.

The two other goals outlined in the document are:

  • Organisations are accountable for safely developing and deploying their frontier AI models and systems.
  • Organisations' approaches to frontier AI safety are appropriately transparent to external actors, including governments.

The document also lists several AI safety best practices that the signatories pledge to apply, if they haven't already. These include red-teaming, watermarking, incentivizing third-party testing, creating safeguards against insider threats, and more.

Said U.K. Prime Minister Rishi Sunak, "These commitments ensure the world's leading AI companies will provide transparency and accountability on their plans to develop safe AI." The pledges laid out in the document are described as "voluntary commitments" and do not carry legal weight.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
