AI Regulation: Industry Groups vs. Government Oversight
Industry Groups and AI Regulation
With the rapid development of generative AI, it seems that every other week major industry players are establishing new agreements and forums to police, or at least give the impression of overseeing, AI development.
This is a good thing, as it establishes collaborative discussion around AI projects and around what each company should be monitoring and managing within the process. At the same time, however, these forums can feel like a means of staving off further regulatory restrictions, which would increase transparency and impose firmer rules on what developers can and can’t do with their projects.
The Coalition for Secure AI (CoSAI)
Google is the latest to come up with a new AI guidance group, forming the Coalition for Secure AI (CoSAI), which is designed to “advance comprehensive security measures for addressing the unique risks that come with AI.”
According to Google, “AI needs a security framework and applied standards that can keep pace with its rapid growth. That’s why last year we shared the Secure AI Framework (SAIF), knowing that it was just the first step. Of course, to operationalize any industry framework requires close collaboration with others – and above all, a forum to make that happen.”
This is not a wholly new initiative, but an expansion of the previously announced SAIF, focused on AI security development and on guiding defensive efforts to help prevent hacks and data breaches.
Other Industry Groups
A range of big tech players have signed up to the new initiative, including Amazon, IBM, Microsoft, NVIDIA, and OpenAI, with the intended goal of creating collaborative, open-source solutions to ensure greater security in AI development.
This is not the only industry group focused on sustainable and secure AI development. For example:
- The Frontier Model Forum (FMF) is aiming to establish industry standards and regulations around AI development. Meta, Amazon, Google, Microsoft, and OpenAI have signed up to this initiative.
- Thorn has established its “Safety by Design” program, which is focused on responsibly sourced AI training datasets, in order to safeguard them from child sexual abuse material. Meta, Google, Amazon, Microsoft, and OpenAI have all signed up to this initiative.
- The U.S. Government has established its AI Safety Institute Consortium (AISIC), which more than 200 companies and organizations have joined.
- Representatives from almost every major tech company have agreed to the Tech Accord to Combat Deceptive Use of AI, which aims to implement “reasonable precautions” in preventing AI tools from being used to disrupt democratic elections.
Government Regulation
EU officials are already assessing the potential harms of AI development and what is, or isn’t, covered under the GDPR, while other regions are weighing the same questions, with the threat of real financial penalties behind their government-agreed parameters.
This feels like what’s actually required, but government regulation takes time, and it’s likely that we won’t see real enforcement systems and structures in place until after the fact. Once the harms are visible, they become far more tangible, and regulators will have more impetus to push official policies through. Until then, we have industry groups, in which each company pledges to play by established rules implemented via mutual agreement.
Conclusion
Essentially, we’re seeing a growing number of forums and agreements designed to address various elements of safe AI development. While that’s a good thing, it’s not clear whether these efforts will be enough to ensure the safety and security of AI development.
Government regulation is likely to be the most effective path, but it’s unclear when it will arrive. In the meantime, industry groups and agreements may be the best we have.
FAQs
Q: What is the Coalition for Secure AI (CoSAI)?
A: The Coalition for Secure AI (CoSAI) is a new AI guidance group formed by Google, designed to advance comprehensive security measures for addressing the unique risks that come with AI.
Q: What is the purpose of CoSAI?
A: The purpose of CoSAI is to establish a security framework and applied standards that can keep pace with the rapid growth of AI, with the goal of creating collaborative, open-source solutions to ensure greater security in AI development.
Q: Which companies have signed up to CoSAI?
A: Amazon, IBM, Microsoft, NVIDIA, and OpenAI have signed up to CoSAI, with the intention of creating collaborative, open-source solutions to ensure greater security in AI development.
Q: Are there other industry groups focused on sustainable and secure AI development?
A: Yes, there are other industry groups focused on sustainable and secure AI development, including the Frontier Model Forum (FMF), Thorn’s “Safety by Design” program, the U.S. Government’s AI Safety Institute Consortium (AISIC), and the Tech Accord to Combat Deceptive Use of AI.
Q: What is the significance of government regulation in AI development?
A: Government regulation is likely to be the most effective way to ensure the safety and security of AI development, as it can impose real financial penalties and establish enforcement systems and structures around AI development.
Q: Will industry groups and agreements be enough to ensure the safety and security of AI development?
A: It’s unclear. Industry groups and agreements are voluntary and may not be enforceable, so government regulation may ultimately be necessary to ensure the safety and security of AI development.