On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has "recently begun" training its next frontier model, which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also comes as a response to a terrible two weeks in the press for the company.
Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, "frontier model" is a term for a new AI system designed to push the boundaries of current capabilities. And "AGI" refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).
Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.
OpenAI says the committee's first task will be to evaluate and further develop those processes and safeguards over the next 90 days. At the end of this period, the committee will share its recommendations with the full board, and OpenAI will publicly share an update on adopted recommendations.
OpenAI says that several technical and policy experts, including Aleksander Madry (head of preparedness), Lilian Weng (head of safety systems), John Schulman (head of alignment science), Matt Knight (head of security), and Jakub Pachocki (chief scientist), will also serve on its new committee.
The announcement is notable in a few ways. First, it's a response to the negative press that came from OpenAI Superalignment team members Ilya Sutskever and Jan Leike resigning two weeks ago. That team was tasked with "steer[ing] and control[ling] AI systems much smarter than us," and their departure has led to criticism from some within the AI community (and Leike himself) that OpenAI lacks a commitment to developing highly capable AI safely. Other critics, like Meta Chief AI Scientist Yann LeCun, think the company is nowhere near developing AGI, so the concern over a lack of safety for superintelligent AI may be overblown.
Second, there have been persistent rumors that progress in large language models (LLMs) has recently plateaued around capabilities similar to GPT-4. Two major competing models, Anthropic's Claude Opus and Google's Gemini 1.5 Pro, are roughly equivalent to the GPT-4 family in capability despite every competitive incentive to surpass it. And recently, when many expected OpenAI to release a new AI model that would clearly surpass GPT-4 Turbo, it instead released GPT-4o, which is roughly equivalent in ability but faster. During that launch, the company relied on a flashy new conversational interface rather than a major under-the-hood upgrade.
We've previously reported on a rumor of GPT-5 coming this summer, but with this recent announcement, it seems the rumors may have been referring to GPT-4o instead. It's quite possible that OpenAI is nowhere near releasing a model that can significantly surpass GPT-4. But with the company quiet on the details, we'll have to wait and see.