U.S. Department of Commerce Proposes Reporting Requirements for AI, Cloud Providers

The United States Department of Commerce is proposing new reporting requirements for AI developers and cloud providers. The proposed rule, issued by the department's Bureau of Industry and Security (BIS), aims to enhance national security by requiring reports on the development of advanced AI models and computing clusters.

Specifically, BIS is asking for reporting on developmental activities, cybersecurity measures, and outcomes from red-teaming efforts, which involve testing AI models for dangerous capabilities such as assisting in cyberattacks or enabling non-experts to develop weapons.

The rule is designed to help the Department of Commerce assess the defense-relevant capabilities of advanced AI systems and ensure they meet stringent safety and reliability standards. This initiative follows a pilot survey conducted earlier this year by BIS and aims to safeguard against potential abuses that could undermine global security, officials said.

"As AI is progressing rapidly, it holds both tremendous promise and risk," said Secretary of Commerce Gina M. Raimondo in a Sept. 9 news release. "This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security."

Separately, under memoranda of understanding, the U.S. AI Safety Institute will gain access to new AI models from the participating companies before and after their public release. That collaboration aims to assess the capabilities and risks of these models and develop methods to mitigate potential safety concerns.

All of these efforts, emerging in such a short span of time, speak to the urgency with which governments, organizations, and industry leaders are moving to address AI regulation.

"The information collected through the proposed reporting requirement will be vital for ensuring these technologies meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors, all of which are imperative for maintaining national defense and furthering America's technological leadership," the BIS news release said. "With this proposed rule, the United States continues to foster innovation while safeguarding against potential abuses that could undermine global security and stability."

About the Author

David Ramel is an editor and writer for Converge360.
