OpenAI and Anthropic Sign Agreements with US Government
OpenAI and Anthropic have signed agreements with the U.S. government, offering their frontier AI models for testing and safety research. According to an announcement from NIST, the U.S. AI Safety Institute will gain access to the technologies “prior to and following their public release.”
Thanks to the respective Memoranda of Understanding (MOUs) signed by the two AI giants, the AISI can evaluate their models’ capabilities and identify and mitigate any safety risks.
About the U.S. AI Safety Institute
The AISI, formally established by NIST in February 2024, focuses on the priority actions outlined in the AI Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence issued in October 2023. These actions include developing standards for the safety and security of AI systems. The group is supported by the AI Safety Institute Consortium, whose members include Meta, OpenAI, NVIDIA, Google, Amazon, and Microsoft.
Quotes from Key Figures
Elizabeth Kelly, director of the AISI, said in the press release: “Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety.”
Jack Clark, co-founder and Head of Policy at Anthropic, told TechRepublic via email: “Safe, trustworthy AI is crucial for the technology’s positive impact. Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment.”
Jason Kwon, Chief Strategy Officer at OpenAI, told TechRepublic via email: “We strongly support the U.S. AI Safety Institute’s mission and look forward to working together to inform safety best practices and standards for AI models.”
AISI to Work with UK AI Safety Institute
The AISI also plans to collaborate with the UK AI Safety Institute when providing safety-related feedback to OpenAI and Anthropic. In April, the two countries formally agreed to collaborate on developing safety tests for AI models.
That agreement upheld the commitments established at the first global AI Safety Summit last November, where governments from around the world accepted their role in safety testing the next generation of AI models.
California’s AI Act
Somewhat at odds with the national perspective on AI regulation, the California State Assembly passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB-1047 or California’s AI Act. The state Senate approved it the following day, and the bill now awaits only Gov. Gavin Newsom’s signature to become law.
Silicon Valley stalwarts OpenAI, Meta, and Google have all penned letters to California lawmakers expressing their concerns about SB-1047, emphasizing the need for a more cautious approach to avoid hindering the growth of AI technologies.
Conclusion
The agreements between OpenAI, Anthropic, and the U.S. AI Safety Institute mark an important milestone for AI safety research. The collaboration between the two AI giants and the U.S. government will help advance the science of AI safety and ensure that AI technologies are developed and used in a responsible and trustworthy manner.
FAQs
Q: What is the U.S. AI Safety Institute?
A: The U.S. AI Safety Institute is a group established by NIST in February 2024 to focus on the priority actions outlined in the AI Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence issued in October 2023.
Q: What is the purpose of the agreements between OpenAI, Anthropic, and the U.S. AI Safety Institute?
A: The agreements will allow the U.S. AI Safety Institute to evaluate the capabilities of OpenAI and Anthropic’s frontier AI models and identify and mitigate any safety risks.
Q: What is the significance of the collaboration between the U.S. AI Safety Institute and the UK AI Safety Institute?
A: The collaboration allows the two institutes to work together on developing safety tests for AI models and to provide safety-related feedback to OpenAI and Anthropic.
Q: What is California’s AI Act?
A: California’s AI Act, also known as SB-1047, is a bill that aims to regulate the development and use of AI technologies in the state. The bill has been criticized by Silicon Valley stalwarts OpenAI, Meta, and Google, who argue that it could hinder the growth of AI technologies.