
OpenAI Fails to Uphold Promises in Battling “Dangerous AI”

OpenAI, a pioneer in artificial intelligence, has come under scrutiny for failing to fulfill its commitments to manage and mitigate the dangers posed by “super-intelligent” AI systems. The company had previously established a Superalignment team to address these challenges, pledging to allocate 20% of its computing resources to the team’s efforts. However, less than a year later, a series of resignations, including that of team leader Jan Leike, has led to the team’s dissolution. A new report from Fortune reveals that OpenAI never honored its commitment.

OpenAI’s Broken Promises

Multiple sources indicate that OpenAI did not provide the Superalignment team with the promised 20% of computing power. The team reportedly never came close to receiving this level of support, with OpenAI leadership repeatedly denying its requests for additional resources. The resignations of Jan Leike and other team members have been attributed to safety concerns. Leike, who co-led the Superalignment initiative alongside OpenAI co-founder Ilya Sutskever, left the company last week, raising further questions about OpenAI’s internal practices.

This situation casts doubt on OpenAI’s public statements regarding AI safety. While the company claims to prioritize the safe development of AI, internal actions suggest otherwise, leading to skepticism about other commitments made by the firm.

Controversy with Scarlett Johansson and ChatGPT’s “Sky” Voice

OpenAI is currently embroiled in a dispute involving Scarlett Johansson and the “Sky” voice used in ChatGPT. OpenAI allegedly sought Johansson’s permission to use her voice, which she declined in September. Despite this, CEO Sam Altman reportedly approached her again before a recent OpenAI event. Johansson now claims OpenAI used her voice without consent, although OpenAI asserts that the voice belongs to an unnamed professional actress.

Silencing Departing Employees?

When OpenAI introduced the Superalignment team last year, it highlighted the existential risks posed by super-intelligent AI, systems hypothesized to surpass the combined intelligence of all humans. The Superalignment team’s mission was to develop solutions to mitigate or prevent these risks. However, the recent report indicates the team never received the promised 20% of computing resources. The vagueness of that “20% of computing power” pledge and the repeated denial of the team’s requests for additional GPU capacity underscore the disconnect between OpenAI’s promises and its actions.

Following his resignation, Jan Leike’s post on X (formerly Twitter) confirmed these issues, stating that “safety culture and processes took a backseat to flashy products.”

Interestingly, sources who spoke to Fortune did so anonymously, fearing job loss or forfeiture of earned equity. Reportedly, OpenAI’s departing employees were pressured into signing exit agreements with strict non-disparagement clauses. CEO Altman expressed “real shame” upon learning about this clause, asserting that OpenAI had never enforced it and had no intention of reclaiming anyone’s earned equity.

This revelation adds another layer of concern about OpenAI’s internal practices and transparency, challenging the company’s commitment to AI safety and ethical governance.
