
Sam Altman: Next OpenAI model will first undergo safety checks by U.S. Government



Amid growing concerns over the safety of advanced AI systems, OpenAI CEO Sam Altman has said that the company’s next major generative AI model will first go to the U.S. government for safety checks.

In a post on X, Altman noted that the company has been working with the U.S. AI Safety Institute — a federal government body — on an agreement to provide early access to its next foundation model and collaborate to push forward the science of AI evaluations. 

The OpenAI boss also emphasized that the company has changed its non-disparagement policies, allowing current and former employees to freely raise concerns about the company and its work, and that it remains committed to allocating at least 20% of its compute resources to safety research.

Letter from U.S. senators questioned OpenAI’s commitment to safety

OpenAI has become a go-to name in the AI industry, thanks to the prowess of ChatGPT and the entire family of foundation models the company has developed. The Altman-led lab has aggressively pushed new and highly capable products (it just challenged Google with SearchGPT), but the fast-paced approach has also drawn criticism, with many, including its own former safety co-leads, claiming that it is ignoring the safety aspect of advanced AI research.

In light of these concerns, five U.S. senators recently wrote to Altman questioning OpenAI’s commitment to safety, as well as cases of possible retribution against former employees who publicly raised concerns, enabled by the non-disparagement clause in its employment contracts.

“OpenAI has announced a guiding commitment to the safe, secure, and responsible development of artificial intelligence (AI) in the public interest. These reports raise questions about how OpenAI is addressing emerging safety concerns,” the senators wrote.

According to Bloomberg, OpenAI’s chief strategy officer Jason Kwon recently responded with a letter reaffirming the company’s commitment to developing artificial intelligence that benefits all humanity. He also said that the lab is dedicated to “implementing rigorous safety protocols” at every stage of the process. 

Among the steps being taken, he mentioned OpenAI’s plan to allocate 20% of its computing resources to safety research (first announced last July), the move to cancel the non-disparagement clause in the employment agreements of current and former employees so they can comfortably raise concerns, and the partnership with the AI Safety Institute to collaborate on safe model releases.

Altman later reiterated these points on X, although without sharing many details, particularly about the ongoing work with the AI Safety Institute.

The government body, housed within the National Institute of Standards and Technology (NIST), was announced last year at the U.K. AI Safety Summit with a mission to address risks associated with advanced AI, including those related to national security, public safety, and individual rights. To achieve this, it is working with a consortium of more than 100 tech industry companies, including Meta, Apple, Amazon, Google and, of course, OpenAI.

However, it is important to note that the U.S. government is not the only one getting early access. OpenAI also has a similar agreement with the U.K. government for the safety screening of its models.

Safety concerns started growing in May

The safety concerns around OpenAI started ballooning in May, when Ilya Sutskever and Jan Leike, the two co-leaders of OpenAI’s superalignment team working to build safety systems and processes to control superintelligent AI models, resigned within hours of each other.

Leike, in particular, was vocal about his departure and noted that the company’s “safety culture and processes have taken a backseat to shiny products.” 

Soon after the departures, reports emerged that the superalignment team had also been disbanded. OpenAI, however, has gone on undeterred, continuing its flurry of product releases while sharing in-house research and efforts on the trust and safety front. It has even formed a new safety and security committee, which is in the process of reviewing the company’s processes and safeguards. 

The committee is led by Bret Taylor (OpenAI board chair and co-founder of customer service startup Sierra AI), Adam D’Angelo (CEO of Quora and AI model aggregator app Poe), Nicole Seligman (former executive vice president and global general counsel of Sony Corporation) and Sam Altman (current OpenAI CEO and one of its co-founders).
