Thursday, July 4, 2024 05:58 PM
OpenAI establishes a Safety and Security Committee to address AI safety concerns and will collaborate with outside experts to strengthen its safety measures.
OpenAI, a leading artificial intelligence research lab, has formed a Safety and Security Committee to address concerns about the safety of its AI projects. The committee, which includes CEO Sam Altman and key board members, aims to strengthen safety practices as OpenAI prepares to train its next AI model.
A key motivation for the committee is the advanced capabilities of OpenAI's chatbots, developed in collaboration with Microsoft. These systems have raised alarms because of their human-like conversational skills and their ability to generate images from text prompts.
The move follows the recent departures of Ilya Sutskever and Jan Leike, who led OpenAI's Superalignment team. That team was responsible for ensuring that AI systems remain aligned with their intended goals, but it was disbanded earlier this year.
The Safety and Security Committee, which includes Chief Scientist Jakub Pachocki and Head of Security Matt Knight, will consult external experts such as Rob Joyce and John Carlin as it evaluates and strengthens OpenAI's safety measures over the next 90 days.
While details about OpenAI's new AI model remain confidential, the company has stated that it aims to advance its systems toward artificial general intelligence (AGI).
By establishing the Safety and Security Committee, OpenAI signals its commitment to the responsible development of AI technologies. Through collaboration with industry experts and a focus on stronger safety measures, the company sets a precedent for ethical AI research and innovation.