NIST's U.S. AI Safety Institute to Collaborate with Anthropic and OpenAI on AI Research

The U.S. AI Safety Institute, part of the National Institute of Standards and Technology (NIST), is partnering with AI companies Anthropic and OpenAI to collaborate on AI safety research, testing, and evaluation.

Under the Memoranda of Understanding, the U.S. AI Safety Institute will gain access to new AI models from both companies before and after their public release. This collaboration aims to assess the capabilities and risks of these models and develop methods to mitigate potential safety concerns.

"Safety is essential to fueling breakthrough technological innovation," said Elizabeth Kelly, director of the U.S. AI Safety Institute, in a statement. "With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety."

"These agreements are just the start," she added, "but they are an important milestone as we work to help responsibly steward the future of AI."

The U.S. AI Safety Institute also intends to work closely with its partners at the U.K. AI Safety Institute to offer feedback to Anthropic and OpenAI on potential safety enhancements to their models.

"Safe, trustworthy AI is crucial for the technology's positive impact," said Anthropic co-founder and head of policy Jack Clark, in a statement. "Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment."

The agreements come at a time of increasing regulatory scrutiny over the safe and ethical use of AI technologies. California legislators are also poised to vote on a bill regulating AI development and deployment.

The initiative builds on NIST’s longstanding legacy in advancing measurement science and standards, with the aim of fostering the safe, secure, and trustworthy development and use of AI, as outlined in the Biden-Harris administration’s Executive Order on AI.

"We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence," said OpenAI chief strategy officer Jason Kwon, "and hope that our work together offers a framework that the rest of the world can build on."

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
