OpenAI has announced the establishment of a safety and security committee and the start of training a new AI model intended to succeed the GPT-4 system that powers its ChatGPT chatbot.
In a blog post on Tuesday, the San Francisco startup stated that the committee will provide advice to the board on significant safety and security decisions related to the company’s projects and operations.
The move comes amid controversy over OpenAI's approach to AI safety, triggered by the resignation of researcher Jan Leike, who criticized the company for prioritizing "shiny products" over safety. Co-founder and chief scientist Ilya Sutskever also resigned, and the "Superalignment" team the two had led was disbanded.
OpenAI said it has begun training its next frontier model, which it expects to deliver industry-leading capability and safety. The company did not directly address the controversies surrounding its safety practices, saying only that it welcomes debate at this important moment.
AI models are systems trained on extensive datasets to produce text, images, video, and human-like conversation. Frontier models are the most powerful, cutting-edge of these systems.
The safety committee comprises several OpenAI insiders, including CEO Sam Altman and board chair Bret Taylor, along with technical and policy experts from the company, as well as Quora CEO Adam D'Angelo and former Sony general counsel Nicole Seligman. Its first task is to evaluate and further develop OpenAI's processes and safeguards and to deliver recommendations to the board within 90 days, after which the company will publicly share the recommendations it adopts, in a manner consistent with safety and security.
Source: www.theguardian.com