ChatGPT-maker OpenAI announces Safety and Security Committee: All details you need to know

ChatGPT-maker OpenAI has started training its next-generation AI model, GPT-5. As training begins, the company on Tuesday announced the formation of a new Safety and Security Committee that includes key board members. OpenAI recently dissolved its Superalignment team, which had been formed to tackle long-term AI risks; the new committee will serve a similar purpose, overseeing safety and security decisions for new projects and operations.

About Safety and Security Committee and its members

On Tuesday, OpenAI shared a blog post announcing the formation of a new Safety and Security Committee led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). OpenAI said the committee is responsible for making recommendations to the company’s board on “critical safety and security decisions for all OpenAI projects.”

Additionally, the committee will include OpenAI’s technical and policy experts, such as Aleksander Madry, John Schulman (Head of Safety Systems), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist). Over the next 90 days, the members will evaluate the company’s plans and further develop its safety processes and safeguards.

Why a Safety and Security Committee?

OpenAI’s new safety committee will thoroughly review the company’s new projects and operations and recommend safety processes for the ethical use of its tools and technology. The company also says it is moving towards the next level of capability on the path to AGI and wants to focus on safety as well as technological advancement. OpenAI said, “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”

Within 90 days, OpenAI’s Safety and Security Committee will present its recommendations and processes for managing safety and security across the company’s projects. This is a major step for OpenAI, as a Wired report highlighted that safety and security measures had taken a backseat after the dissolution of the Superalignment team. Meanwhile, AI researchers have raised significant concerns about upcoming AI capabilities, which demand greater attention to safeguards and ethical use.
