
## OpenAI Restructures Safety Committee, Altman Steps Back
*September 18, 2024*
**SAN FRANCISCO** – OpenAI, the leading artificial intelligence (AI) research company, announced significant changes to its internal Safety and Security Committee, including the departure of CEO Sam Altman from the co-director position.
The committee, established in May to oversee crucial safety and security decisions related to AI model development and deployment, will now operate as an “independent board oversight committee” led by Zico Kolter, Director of Carnegie Mellon University’s machine learning department.
Kolter replaces Bret Taylor, who has also stepped down from the role. Remaining members of the committee include:
* Adam D’Angelo, Quora co-founder and CEO
* Retired US Army General Paul Nakasone
* Nicole Seligman, former EVP and general counsel at Sony Corporation
This restructuring comes as OpenAI faces increasing scrutiny over the safety of its technology, particularly in the wake of reports alleging the company used illegally restrictive non-disclosure agreements and required employees to disclose any contact with authorities, effectively suppressing security concerns.
OpenAI’s recent release of its advanced language model, OpenAI o1, further highlighted the importance of robust safety and security protocols. The committee, under Kolter’s leadership, reviewed the safety criteria used to assess OpenAI o1’s “fitness” for launch.
Moving forward, the committee will focus on:
* Establishing independent governance for AI safety and security
* Enhancing security measures
* Fostering transparency about OpenAI’s work
* Collaborating with external organizations
* Unifying safety frameworks for model development and monitoring
Altman’s departure from the committee is a significant move, but its implications for AI governance are yet to be fully understood.
Abhishek Sengupta, practice director at Everest Group, believes the restructuring signals OpenAI’s recognition of “the importance of neutrality in AI governance efforts,” and could lead to greater transparency in managing AI security and safety risks.
“While the need to innovate fast has strained governance for AI, increasing government scrutiny and the risk of public blowback is gradually bringing it back into focus,” Sengupta said. “It is likely that we will increasingly see independent third parties involved in AI governance and audit.”