OpenAI has recently formed a new Safety and Security Committee composed exclusively of internal members, a move that has sparked significant debate. Let’s delve into the details.
Internal Members Dominate OpenAI’s New Safety and Security Committee
The newly established Safety and Security Committee includes CEO Sam Altman, board members Bret Taylor, Adam D’Angelo, and Nicole Seligman, Chief Scientist Jakub Pachocki, Head of Preparedness Aleksander Madry, Head of Safety Systems Lilian Weng, Head of Security Matt Knight, and Head of Alignment Science John Schulman. The committee’s first task is to evaluate and further develop OpenAI’s safety and security processes and safeguards over the next 90 days. Once the evaluation is complete, its findings and recommendations will be presented to the full board, and some of them will be shared publicly.
High-Profile Departures and Internal Criticism
In recent months, OpenAI has seen several high-profile departures from its safety teams, with former employees publicly questioning the company’s commitment to AI safety. Daniel Kokotajlo resigned in April, expressing doubts that OpenAI would handle increasingly powerful AI responsibly. In May, co-founder and Chief Scientist Ilya Sutskever left the company, reportedly in part over disagreements with CEO Sam Altman about prioritizing rapid product releases over safety work. More recently, Jan Leike, a former DeepMind researcher who helped develop ChatGPT and InstructGPT, also departed, citing concerns over OpenAI’s handling of safety and security. AI policy researcher Gretchen Krueger echoed these concerns, calling for greater accountability and transparency from the company.
Advocacy and Lobbying Efforts
While publicly advocating for AI regulation, OpenAI has also worked actively to shape those regulations, dedicating significant resources to lobbying. CEO Sam Altman is also set to serve on the newly established Artificial Intelligence Safety and Security Board of the U.S. Department of Homeland Security.
External Experts to Address Criticism
In response to criticism of the committee’s all-internal composition, OpenAI announced plans to retain external experts, including cybersecurity specialist Rob Joyce and former U.S. Department of Justice official John Carlin. However, details about the size of this external group and its influence on the committee remain unclear. Bloomberg columnist Parmy Olson has noted that internal oversight boards of this kind rarely provide genuine independent scrutiny. While OpenAI says the committee will address valid criticisms, the effectiveness of this approach remains to be seen.
Promises of External Governance
In 2016, CEO Sam Altman said that external representatives would play a significant role in OpenAI’s governance. That plan was never implemented, however, and seems unlikely to materialize now.
Conclusion
The formation of OpenAI’s new Safety and Security Committee, composed entirely of internal members, has raised questions about the company’s commitment to transparency and external oversight. As OpenAI continues to navigate the complexities of AI safety and regulation, industry stakeholders and the public will be watching closely to see how effective the committee proves and how much influence its external experts are given.