Industry Group Tackles AI Safety and Security

Tech giants Google, Microsoft, Amazon, OpenAI and others have formed a new industry group aimed at promoting AI safety and security standards.

The Coalition for Secure AI (CoSAI) launched on Thursday as a self-described "open source initiative designed to give all practitioners and developers the guidance and tools they need to create Secure-by Design AI systems."

"Founding Premier Sponsors" of CoSAI include Microsoft, Nvidia, Google, IBM, Intel, and PayPal. Listed as "additional" founding members are OpenAI, Anthropic, Amazon, Cisco, Cohere, Chainguard, GenLab, and Wiz.

A Technical Steering Committee of AI experts from academia and industry will oversee the group's work.

The primary mission of CoSAI is to "develop comprehensive security measures that address AI systems' classical and unique risks." This is difficult to do in the current AI landscape, the group argues, because existing efforts to establish AI security standards are fragmented, uncoordinated, and inconsistently applied.

Though it recognizes those efforts and plans to collaborate with other groups focused on AI security, CoSAI believes it is uniquely positioned to establish standards that can be widely agreed-upon and adopted due to its diverse and high-profile membership roster.

"As a Founding Member of the Coalition for Secure AI, Microsoft will partner with similarly committed organizations towards creating industry standards for ensuring that AI systems and the machine learning required to develop them are built with security by default and with safe and responsible use and practices in mind," said Microsoft's AI safety chief Yonatan Zunger in a prepared statement. "Through membership and partnership within the Coalition for Secure AI, Microsoft continues its commitment to empower every person and every organization on the planet to do more ... securely."

"From day one, AWS AI infrastructure and the Amazon services built on top of it have had security and privacy features built-in that give customers strong isolation with flexible control over their systems and data," commented Paul Vixie, vice president and Distinguished Engineer at Amazon Web Services. "As a sponsor of CoSAI, we're excited to collaborate with the industry on developing needed standards and practices that will strengthen AI security for everyone."

"Developing and deploying AI technologies that are secure and trustworthy is central to OpenAI's mission," said Nick Hamilton, head of Governance, Risk and Compliance at OpenAI. "We believe that developing robust standards and practices is essential for ensuring the safe and responsible use of AI and we're committed to collaborating across the industry to do so."

Per CoSAI's founding charter, the group intends to find and share mitigations for AI security risks such as "stealing the model, data poisoning of the training data, injecting malicious inputs through prompt injection, scaled abuse prevention, membership inference attacks, model inversion attacks or gradient inversion attacks to infer private information, and extracting confidential information from the training data."

Interestingly, the group does not consider the following areas to be part of its purview: "misinformation, hallucinations, hateful or abusive content, bias, malware generation, phishing content generation, or other topics in the domain of content safety."

At its outset, CoSAI plans to pursue the following three research areas:

  • AI software supply chain security: The group will explore how to assess the safety of a given AI system based on its provenance. For instance, the group will examine who trained the AI system and how, as well as whether its training process may have left the AI vulnerable to tampering at any point.
  • Security framework development: The group will identify "investments and mitigation strategies" to address the security vulnerabilities in both today's AI systems, as well as future versions.
  • Security and privacy governance: The group will create guidelines to help AI developers and vendors measure risk in their systems.

CoSAI plans to release a paper by the end of this year providing an overview of its research findings.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.

Featured

  • An elementary school teacher and young students interact with floating holographic screens displaying colorful charts and playful data visualizations in a minimalist classroom setting

    New AI Collaborative to Explore Use of Artificial Intelligence to Improve Teaching and Learning

    Education-focused nonprofits Leading Educators and The Learning Accelerator have partnered to launch the School Teams AI Collaborative, a yearlong pilot initiative that will convene school teams, educators, and thought leaders to explore ways that artificial intelligence can enhance instruction.

  • landscape photo with an AI rubber stamp on top

    California AI Watermarking Bill Supported by OpenAI

    OpenAI, creator of ChatGPT, is backing a California bill that would require tech companies to label AI-generated content in the form of a digital "watermark." The proposed legislation, known as the "California Digital Content Provenance Standards" (AB 3211), aims to ensure transparency in digital media by identifying content created through artificial intelligence. This requirement would apply to a broad range of AI-generated material, from harmless memes to deepfakes that could be used to spread misinformation about political candidates.

  • closeup of laptop and smartphone calendars

    2024 Tech Tactics in Education Conference Agenda Announced

    Registration is free for this fully virtual Sept. 25 event, focused on "Building the Future-Ready Institution" in K-12 and higher education.

  • cloud icon connected to a data network with an alert symbol (a triangle with an exclamation mark) overlaying the cloud

    U.S. Department of Commerce Proposes Reporting Requirements for AI, Cloud Providers

    The United States Department of Commerce is proposing a new reporting requirement for AI developers and cloud providers. This proposed rule from the department's Bureau of Industry and Security (BIS) aims to enhance national security by establishing reporting requirements for the development of advanced AI models and computing clusters.