NIST Introduces National Generative AI Testing Program

The National Institute of Standards and Technology (NIST) is moving toward establishing a more standardized national approach to AI safety. The government agency has announced the launch of NIST GenAI, described as an "evaluation program to support research in Generative AI technologies."

The launch comes six months after President Biden signed an Executive Order requiring LLM makers to implement guardrails around AI technologies that protect the privacy and security of consumer data. For instance, the order mandated the development of "standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy," and of "standards and best practices for detecting AI-generated content and authenticating official content."

The NIST GenAI program is part of the agency's effort to address those mandates.

A companion NIST program, dubbed Aria, is set to launch soon. Aria's stated goal is "to advance measurement science for safe and trustworthy AI."

In a press release Monday, the U.S. Department of Commerce, of which NIST is part, described the GenAI program as a platform to "evaluate and measure generative AI technologies."

"The NIST GenAI program will issue a series of challenge problems designed to evaluate and measure the capabilities and limitations of generative AI technologies," said the agency. "These evaluations will be used to identify strategies to promote information integrity and guide the safe and responsible use of digital content."

The first of these challenges aims to evaluate the efficacy of text-to-text (T2T) AI models -- both those that generate human-like text ("generators") and those that purport to detect AI-generated text ("discriminators"). Findings from the challenge will help guide NIST's eventual recommendations to LLM makers on how to convey the provenance of content made using their AI systems. Here is how NIST describes the challenge on its Overview page:

NIST GenAI T2T is an evaluation series that supports research in Generative AI Text-to-Text modality. Which generative AI models are capable of producing synthetic content that can deceive the best discriminators as well as humans? The performance of generative AI models can be measured by (a) humans and (b) discriminative AI models. To evaluate the "best" generative AI models, we need the most competent humans and discriminators. The most proficient discriminators are those that possess the highest accuracy in detecting the "best" generative AI models. Therefore, it is crucial to evaluate both generative AI models (generators) and discriminative AI models (discriminators).
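The evaluation logic NIST describes -- ranking discriminators by how accurately they detect output from the strongest generators -- can be sketched with a toy example. Everything below is illustrative only: the heuristic discriminator and the sample texts are placeholders, not NIST tooling or data.

```python
# Toy illustration of scoring a discriminator by detection accuracy.
# The "discriminator" here is a crude placeholder heuristic, not any
# real NIST evaluation tool or a production AI-text detector.

def toy_discriminator(text: str) -> str:
    """Placeholder detector: flags low vocabulary diversity as AI-generated."""
    words = text.split()
    diversity = len(set(words)) / max(len(words), 1)
    return "ai" if diversity <= 0.5 else "human"

def accuracy(discriminator, labeled_samples) -> float:
    """Fraction of (text, true_label) pairs the discriminator labels correctly."""
    correct = sum(1 for text, label in labeled_samples
                  if discriminator(text) == label)
    return correct / len(labeled_samples)

# Hypothetical labeled samples (ground truth known in advance, as in a
# challenge test set).
samples = [
    ("the cat sat on the mat the cat sat on the mat", "ai"),
    ("Quarterly revenue climbed sharply despite headwinds.", "human"),
]

print(accuracy(toy_discriminator, samples))  # detection accuracy in [0, 1]
```

In NIST's framing, the same accuracy score cuts both ways: a high score ranks the discriminator as proficient, while a generator that drives competent discriminators' accuracy toward chance demonstrates it can produce deceptive synthetic content.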

The challenge is open to academics, researchers and LLM makers; participation guidelines are available on NIST's site. A similar challenge to evaluate text-to-image models is set to start soon.

Besides the GenAI program launch, NIST this week released preliminary versions of four papers about the secure development and implementation of AI. These papers, which are described as "initial drafts," are as follows:

Each draft is still subject to change based on public input. NIST is accepting feedback on each publication until June 2, and plans to publish final versions "later this year."

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
