Coalition for Secure AI: A Collaborative Effort to Ensure AI Security

Aiming to Democratize AI Security
To address the growing concerns around artificial intelligence (AI) security, major tech companies including Google, IBM, Intel, Microsoft, NVIDIA, and PayPal have joined forces to launch the Coalition for Secure AI (CoSAI). Announced on July 18 at the Aspen Security Forum, this open-source initiative aims to develop standardised practices and tools for creating secure AI systems, potentially reshaping the landscape of AI development and deployment.
The Coalition for Secure AI (CoSAI) emerges as a response to the fragmented landscape of AI security, where developers often struggle with inconsistent guidelines and standards. The coalition, hosted by the OASIS global standards body, purports to tackle the fragmented landscape of AI security where developers grapple with inconsistent guidelines. However, critics might argue that this fragmentation is partly due to the very companies now claiming to solve it.
CoSAI’s Establishment and Goals
“CoSAI’s establishment was rooted in the necessity of democratising the knowledge and advancements essential for the secure integration and deployment of AI,” said David LaBianca, Google’s representative and CoSAI Governing Board co-chair.
The initiative brings together a diverse range of stakeholders, including industry leaders, academics, and experts. In addition to the founding Premier Sponsors, other tech giants such as Amazon, Anthropic, Cisco, OpenAI, and several others have joined as founding Sponsors.
CoSAI’s scope is comprehensive, addressing various aspects of AI security including:
- Securely building, integrating, deploying, and operating AI systems
- Mitigating risks such as model theft, data poisoning, prompt injection, scaled abuse, and inference attacks
- Developing security measures that address both classical and unique risks associated with AI systems
Initial Workstreams
To kickstart its efforts, CoSAI has announced three initial workstreams:
- Software supply chain security for AI systems
- Preparing defenders for a changing cybersecurity landscape
- AI security governance
Industry Backing
The announcement of CoSAI has been met with enthusiasm across the tech industry, with participating companies emphasising the importance of collaboration in addressing AI security challenges.
Paul Vixie, Deputy CISO at Amazon Web Services, stated, “As a sponsor of CoSAI, we’re excited to collaborate with the industry on developing needed standards and practices that will strengthen AI security for everyone.”
Anthropic’s Chief Information Security Officer, Jason Clinton, highlighted the alignment with their company’s mission: “As a safety-focused organisation, building and deploying secure AI models has been core to our mission from the start. We’re proud to partner with other industry leaders to help foster a secure AI ecosystem.”
The Implications of CoSAI
The initiative promises a lot on paper but these lofty goals raise questions: Will the standards truly be inclusive, or will they primarily benefit the big players? Can we trust these companies to self-regulate effectively?
As CoSAI moves forward, it invites contributions from the wider tech community.
Conclusion
The Coalition for Secure AI (CoSAI) aims to democratize AI security by developing standardised practices and tools for creating secure AI systems. With the support of major tech companies, the initiative has the potential to reshape the landscape of AI development and deployment. However, concerns remain about the potential for self-serving interests and the need for independent oversight. As the initiative moves forward, it will be essential to monitor its progress and ensure that the benefits of CoSAI are shared equitably among all stakeholders.
FAQs
What is the Coalition for Secure AI (CoSAI)? CoSAI is an open-source initiative aimed at developing standardised practices and tools for creating secure AI systems.
Who are the founding members of CoSAI? The founding members of CoSAI include Google, IBM, Intel, Microsoft, NVIDIA, and PayPal.
What are the goals of CoSAI? CoSAI aims to develop standardised practices and tools for creating secure AI systems, addressing various aspects of AI security including securely building, integrating, deploying, and operating AI systems, mitigating risks, and developing security measures.
What are the initial workstreams of CoSAI? The initial workstreams of CoSAI include software supply chain security for AI systems, preparing defenders for a changing cybersecurity landscape, and AI security governance.
What is the role of independent oversight in CoSAI? The role of independent oversight in CoSAI is essential to ensure that the benefits of the initiative are shared equitably among all stakeholders and that self-serving interests are not prioritized.








