The Coalition for Secure AI (CoSAI): A New Initiative for Better AI Security
About CoSAI
Tech giants and AI developers have come together to form the Coalition for Secure AI (CoSAI), a new initiative aimed at achieving better security for AI applications. The initiative is hosted by OASIS Open, a non-profit consortium that develops open standards and open-source projects, including in the field of cybersecurity.
CoSAI brings together a diverse group of participants: the big three hyperscalers (AWS, Microsoft, and Google), AI developers such as OpenAI, Anthropic, Cohere, and GenLab, and other well-known tech companies including Nvidia, Intel, IBM, Cisco, PayPal, Wiz, and Chainguard.
The Importance of AI Security
The security of AI applications is becoming increasingly important, and the tech industry wants to take responsibility for it. As AI applications become more widespread, so does the risk of security breaches and cyber attacks. CoSAI aims to address this by developing dedicated tooling and by setting up an ecosystem for sharing best practices.
Goals of CoSAI
CoSAI has two main goals. The first goal is to provide companies and organizations with the necessary tooling and technical expertise to secure their AI applications. The second goal is to create an ecosystem where companies can share their best practices and technology for AI-related cybersecurity.
Workstreams
To achieve these goals, CoSAI has launched three open-source workstreams, or projects. The first project focuses on helping software developers scan their ML workloads for security risks. This involves developing a taxonomy of known vulnerabilities and mitigations for them, as well as a “cybersecurity scorecard” for monitoring AI systems for vulnerabilities and reporting them to stakeholders.
The second project focuses on countering AI security risks. The goal is to identify the investments and mitigation techniques needed to address the security impact of AI use. This includes identifying potential vulnerabilities in AI models and developing strategies to mitigate them.
The third project focuses on the risk of supply chain attacks. This includes developing workflows that simplify checking software components for vulnerabilities, particularly components pulled from public repositories such as those hosted on GitHub.
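To make this concrete, here is a minimal sketch, assuming only a Python standard-library environment, of the kind of dependency check such workflows typically automate: querying the public OSV.dev vulnerability database for advisories recorded against a specific package version. This is not CoSAI tooling, and the package and version in the example are placeholders chosen purely for illustration.

```python
# Illustrative sketch only: CoSAI has not published its supply-chain tooling.
# This shows one common way to check a dependency against a public
# vulnerability database (OSV.dev), the kind of step such workflows automate.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    """Return OSV advisories recorded for a specific package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])

if __name__ == "__main__":
    # Example: an older TensorFlow release with published advisories.
    for advisory in known_vulnerabilities("tensorflow", "2.4.0"):
        print(advisory["id"], "-", advisory.get("summary", "no summary"))
```

In a real pipeline, a lookup like this would run against the full dependency manifest of an ML workload rather than a single package, and the results would feed into reporting such as the scorecard described above.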
Instilling Fear of Open-Source
In parallel, CoSAI will also pay close attention to the risks of building solutions and applications on third-party AI models. While open-source neural networks can be a cost-effective alternative to developing proprietary models, these models may contain vulnerabilities that attackers can exploit.
Future Projects
CoSAI plans to start more projects in the future. All initiatives will be overseen by a technical committee of AI experts from academia and the private sector.
Conclusion
The Coalition for Secure AI (CoSAI) is a new initiative that aims to achieve better security for AI applications. By developing dedicated tooling and setting up an ecosystem for sharing best practices, CoSAI hopes to give companies and organizations the expertise they need to secure their AI applications. With its diverse group of participants, CoSAI is well-positioned to address the growing concern of AI security.
FAQs
- What is CoSAI? CoSAI is a new initiative aimed at achieving better security for AI applications.
- Who are the participants in CoSAI? The participants in CoSAI include tech giants and AI developers, such as AWS, Microsoft, Google, OpenAI, Anthropic, Cohere, and GenLab, as well as well-known tech companies like Nvidia, Intel, IBM, Cisco, PayPal, Wiz, and Chainguard.
- What are the goals of CoSAI? The goals of CoSAI are to provide companies and organizations with the necessary tooling and technical expertise to secure their AI applications, and to create an ecosystem where companies can share their best practices and technology for AI-related cybersecurity.
- What are the workstreams of CoSAI? CoSAI has launched three open-source workstreams, or projects, which focus on scanning ML workloads for security risks, countering AI security risks, and addressing the risks of supply chain attacks.
- What is the future of CoSAI? CoSAI plans to start more projects in the future, which will be overseen by a technical committee of AI experts from academia and the private sector.