Senators demand OpenAI release data showing it keeps its AI safe

Letter Requests Data on Employee Agreements and Safety Testing

Senators demanded in a Monday letter that OpenAI turn over data about its efforts to build safe and secure artificial intelligence, following employee warnings, detailed in The Washington Post earlier this month, that the company rushed safety testing of its latest AI model.

Led by Sen. Brian Schatz (D-Hawaii), the five lawmakers asked OpenAI chief executive Sam Altman to outline how the maker of ChatGPT plans to meet “public commitments” to ensure its AI does not cause harm, such as teaching users to build bioweapons or helping hackers develop new kinds of cyberattacks.

Employee Concerns and Whistleblower Allegations

The senators also asked the company for information about employee agreements, which could have muzzled workers who wished to alert regulators to risks. In a July letter to the Securities and Exchange Commission, OpenAI whistleblowers said they had filed a complaint with the agency alleging the company illegally issued restrictive severance, nondisclosure, and employment agreements, potentially penalizing workers who wished to raise concerns with federal regulators.

OpenAI’s Response

In a statement to The Post earlier this month, OpenAI spokesperson Hannah Wong said the company has “made important changes to our departure process to remove nondisparagement terms” from staff agreements.

“Artificial intelligence is a transformative new technology and we appreciate the importance it holds for U.S. competitiveness and national security,” OpenAI spokesperson Liz Bourgeois wrote. “We take our role in developing safe and secure AI very seriously and continue to work alongside policymakers to establish the appropriate safeguards going forward.”

Letter Requests and Demands

The senators asked OpenAI to commit to not enforcing nondisparagement agreements and “removing any other provisions” from employee agreements that could be used to punish those who raise concerns about company practices.

The letter also asked OpenAI to respond to the requests by Aug. 13, including with documentation on how it plans to meet its voluntary pledge to the Biden administration to protect the public from abuses of generative AI.

Whistleblower Concerns

Stephen Kohn, a lawyer representing OpenAI whistleblowers, said the senators’ requests are “not sufficient” to cure the chilling effect of preventing employees from speaking about company practices. “What steps are they taking to cure that cultural message,” he said, “to make OpenAI an organization that welcomes oversight?”

Kohn added that Congress should hold hearings and open an investigation into OpenAI’s practices.

Conclusion

The senators’ letter underscores the need for transparency and accountability in the development and deployment of artificial intelligence. As the technology continues to advance, companies like OpenAI will face mounting pressure to prioritize safety and security, and regulators and policymakers will be expected to hold them accountable.

FAQs

Q: What is the purpose of the letter from the senators to OpenAI?

A: The letter is requesting information from OpenAI about its efforts to build safe and secure artificial intelligence, as well as its employee agreements and safety testing practices.

Q: What are the concerns of the senators regarding OpenAI’s employee agreements?

A: The senators are concerned that OpenAI’s employee agreements may be preventing workers from speaking out about company practices and raising concerns to regulators.

Q: What is the significance of the OpenAI whistleblowers’ allegations?

A: The allegations suggest that OpenAI may have illegally issued restrictive agreements to employees, potentially penalizing those who wish to raise concerns about company practices.

Q: What is the timeline for OpenAI’s response to the senators’ letter?

A: The senators asked OpenAI to respond by Aug. 13, including with documentation on how it plans to meet its voluntary pledge to the Biden administration to protect the public from abuses of generative AI.

Q: What is the significance of the letter for the development of AI technology?

A: The letter highlights the need for transparency and accountability in the development and deployment of AI technology, and underscores the importance of prioritizing safety and security in the development of AI systems.

