August 29, 2024 (WASHINGTON) In a groundbreaking move that signals a new era of collaboration between the public sector and artificial intelligence pioneers, the United States Artificial Intelligence Safety Institute has announced the signing of research agreements with AI startups OpenAI and Anthropic. The deals, revealed on Thursday, are set to pave the way for extensive research, testing, and evaluation of cutting-edge AI models.
These first-of-their-kind partnerships emerge against a backdrop of increasing regulatory scrutiny surrounding the safe and ethical deployment of AI technologies. The timing is particularly significant, as California legislators are poised to vote on a comprehensive bill that could reshape how AI is developed and implemented within the state.
Under the terms of these agreements, the US AI Safety Institute will be granted privileged access to major new AI models from both OpenAI and Anthropic, before and after their public release. This level of access will support collaborative research aimed at evaluating the capabilities and potential risks of these advanced AI systems.
Jason Kwon, Chief Strategy Officer at OpenAI, the company behind the widely known ChatGPT, expressed enthusiasm about the partnership, stating, “We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on.”
While Anthropic, which boasts backing from tech giants Amazon and Alphabet, has yet to comment on the agreement, the implications of its involvement are significant for the AI industry as a whole.
Elizabeth Kelly, Director of the US AI Safety Institute, underscored the importance of these agreements, saying, “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”
The institute, which operates under the aegis of the US Commerce Department’s National Institute of Standards and Technology (NIST), will not limit its collaboration to domestic entities. Plans are in place for it to work closely with the UK AI Safety Institute, its British counterpart, fostering an international approach to AI safety and development.