U.S. Department of Commerce Proposes Reporting Requirements for AI, Cloud Providers

The United States Department of Commerce is proposing a new reporting requirement for AI developers and cloud providers. The proposed rule, from the department's Bureau of Industry and Security (BIS), aims to enhance national security by requiring reports on the development of advanced AI models and computing clusters.

Specifically, BIS is asking for reporting on development activities, cybersecurity measures, and the results of red-teaming efforts, in which AI models are tested for dangerous capabilities such as assisting in cyberattacks or enabling non-experts to develop weapons.

The rule is designed to help the Department of Commerce assess the defense-relevant capabilities of advanced AI systems and ensure they meet stringent safety and reliability standards. This initiative follows a pilot survey conducted earlier this year by BIS and aims to safeguard against potential abuses that could undermine global security, officials said.

"As AI is progressing rapidly, it holds both tremendous promise and risk," said Secretary of Commerce Gina M. Raimondo in a Sept. 9 news release. "This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security."

Separately, under memoranda of understanding, the U.S. AI Safety Institute will gain access to new AI models from OpenAI and Anthropic before and after their public release. That collaboration aims to assess the capabilities and risks of the models and to develop methods for mitigating potential safety concerns.

All of these efforts, arriving in such a short span of time, underscore the urgency with which governments, organizations, and industry leaders are moving to address AI regulation.

"The information collected through the proposed reporting requirement will be vital for ensuring these technologies meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors, all of which are imperative for maintaining national defense and furthering America's technological leadership," the BIS news release said. "With this proposed rule, the United States continues to foster innovation while safeguarding against potential abuses that could undermine global security and stability."

About the Author

David Ramel is an editor and writer at Converge 360.
