Industry Group Tackles AI Safety and Security
Tech giants Google, Microsoft, Amazon, OpenAI, and others have formed a new industry group aimed at promoting AI safety and security standards.
The Coalition for Secure AI (CoSAI) launched on Thursday as a self-described “open source initiative designed to give all practitioners and developers the guidance and tools they need to create Secure-by-Design AI systems.”
“Founding Premier Sponsors” of CoSAI include Microsoft, Nvidia, Google, IBM, Intel, and PayPal. Listed as “additional” founding members are OpenAI, Anthropic, Amazon, Cisco, Cohere, Chainguard, GenLab, and Wiz.
A Technical Steering Committee of AI experts from academia and industry will oversee the group’s work. The primary mission of CoSAI is to “develop comprehensive security measures that address AI systems’ classical and unique risks.”
This is difficult to do in the current AI landscape, the group argues, because existing efforts to establish AI security standards are fragmented, uncoordinated, and inconsistently applied.
Though it recognizes those efforts and plans to collaborate with other groups focused on AI security, CoSAI believes its diverse, high-profile membership uniquely positions it to establish standards that can be widely agreed upon and adopted.
“Developing and deploying AI technologies that are secure and trustworthy is central to OpenAI’s mission,” said Nick Hamilton, head of Governance, Risk and Compliance at OpenAI. “We believe that developing robust standards and practices is essential for ensuring the safe and responsible use of AI and we’re committed to collaborating across the industry to do so.”
“As a Founding Member of the Coalition for Secure AI, Microsoft will partner with similarly committed organizations towards creating industry standards for ensuring that AI systems and the machine learning required to develop them are built with security by default and with safe and responsible use and practices in mind,” said Microsoft’s AI safety chief Yonatan Zunger in a prepared statement.
The group does not consider the following areas to be part of its purview: “misinformation, hallucinations, hateful or abusive content, bias, malware generation, phishing content generation, or other topics in the domain of content safety.” Instead, CoSAI will focus on three initial workstreams:
- AI software supply chain security: The group will explore how to assess the safety of a given AI system based on its provenance.
- Security framework development: The group will identify “investments and mitigation strategies” to address security vulnerabilities in both today’s AI systems and future versions.
- Security and privacy governance: The group will create guidelines to help AI developers and vendors measure risk in their systems.
CoSAI plans to release a paper by the end of this year providing an overview of its research findings.
Frequently Asked Questions
Q: What is the mission of the Coalition for Secure AI (CoSAI)?
A: CoSAI’s primary mission is to “develop comprehensive security measures that address AI systems’ classical and unique risks.”
Q: Which organizations have joined CoSAI?
A: “Founding Premier Sponsors” include Microsoft, Nvidia, Google, IBM, Intel, and PayPal. Listed as “additional” founding members are OpenAI, Anthropic, Amazon, Cisco, Cohere, Chainguard, GenLab, and Wiz.
Q: How will CoSAI achieve its goals?
A: A Technical Steering Committee of AI experts from academia and industry will oversee the group’s work.
Q: What areas is CoSAI not focused on?
A: CoSAI does not consider areas like misinformation, hallucinations, hateful or abusive content, bias, malware generation, phishing content generation, or other topics in the domain of content safety to be within its purview.
Conclusion
CoSAI has taken a significant first step toward tackling AI safety and security concerns, assembling a diverse and high-profile roster of member organizations. With a stated commitment to developing robust standards and practices, the Coalition aims to establish guidelines that will help ensure the safe and responsible use of AI technology.