OpenAI and Anthropic Agree to Let U.S. AI Safety Institute Test Models
OpenAI and Anthropic, two of the highest-valued startups in the artificial intelligence sector, have agreed to let the U.S. AI Safety Institute evaluate their upcoming AI models before public release. The decision comes amid growing concerns about safety and ethics in the AI field.
The U.S. AI Safety Institute, housed within the Department of Commerce's National Institute of Standards and Technology (NIST), announced that it will gain access to significant new models from both OpenAI and Anthropic before and after they are made public. The initiative followed the Biden-Harris administration's first-ever executive order on AI, which mandated new safety assessments, guidance on equity and civil rights, and research into AI's effects on the job market.
OpenAI’s CEO, Sam Altman, expressed his satisfaction with the collaboration in a post on X, stating, “We are pleased to have reached an agreement with the U.S. AI Safety Institute for the pre-release testing of our upcoming models.” Additionally, OpenAI confirmed to CNBC that its weekly active user base has doubled to 200 million over the past year.
This development follows reports that OpenAI is negotiating a funding round that could value the company at over $100 billion. Thrive Capital is expected to lead this round with an investment of $1 billion, according to a source who requested anonymity due to the confidential nature of the information.
Anthropic, founded by former OpenAI executives and employees, was most recently valued at $18.4 billion and counts Amazon among its primary investors, while OpenAI receives substantial backing from Microsoft. The agreements between the government and the two companies will facilitate joint research on how to evaluate capabilities and safety risks, as well as methods for mitigating those risks, according to the announcement released on Thursday.
Jason Kwon, OpenAI’s chief strategy officer, remarked in a statement to CNBC, “We are fully supportive of the U.S. AI Safety Institute’s objectives and anticipate collaborating to enhance safety practices and standards for AI models.” Jack Clark, co-founder of Anthropic, said the partnership with the U.S. AI Safety Institute draws on the institute’s extensive expertise to rigorously test Anthropic’s models before they are widely deployed, strengthening the company’s ability to identify and mitigate risks and advancing responsible AI development.
Concerns regarding safety and ethical practices have been raised by numerous AI developers and researchers as the industry becomes increasingly profit-driven. A letter signed by current and former OpenAI employees, published on June 4, highlighted potential issues stemming from the rapid progress in AI technology and insufficient oversight and whistleblower protections.
The letter stated, “AI companies have significant financial motivations to evade effective oversight, and we do not believe that tailored corporate governance structures are adequate to address this.” The authors also pointed out that AI firms currently have limited obligations to disclose information to governments and none to civil society, making it unlikely they would share such information voluntarily.
Shortly after the letter’s release, a source informed CNBC that the Federal Trade Commission (FTC) and the Department of Justice were preparing to open antitrust investigations into OpenAI, Microsoft, and Nvidia. FTC Chair Lina Khan has described her agency’s action as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”
On Wednesday, California lawmakers approved a contentious AI safety bill, which is now awaiting Governor Gavin Newsom’s decision. The Democratic governor has until September 30 to either veto the legislation or enact it into law. The bill, which seeks to mandate safety testing and other protective measures for AI models meeting certain cost or computational criteria, has faced opposition from some technology companies that argue it could hinder innovation.
Source: https://www.cnbc.com/2024/08/29/openai-and-anthropic-agree-to-let-us-ai-safety-institute-test-models.html