OpenAI Establishes Safety Team for New AI Model Training

OpenAI has announced the formation of a “Safety and Security Committee” to oversee risk management as it trains its next major AI model, a step it frames as advancing toward Artificial General Intelligence (AGI), which some anticipate is more than five years away. The announcement comes on the heels of controversy over leaked internal documents that revealed restrictive company policies toward departing employees.


The committee, chaired by OpenAI director Bret Taylor and including directors Adam D’Angelo and Nicole Seligman alongside CEO Sam Altman, will provide guidance on AI safety measures to the full board of directors. The initiative reflects OpenAI’s stated commitment to developing AI responsibly and addressing the risks associated with powerful AI systems.


In the AI industry, a “frontier model” refers to a cutting-edge AI system designed to push the boundaries of current capabilities, and AGI represents a hypothetical endpoint of that trajectory. Unlike narrow AI, which is specialized for specific tasks, an AGI would aim to perform any intellectual task a human can, including tasks it has not been explicitly trained for.


The newly formed Safety and Security Committee will focus on a comprehensive range of safety measures. These include alignment research to ensure AI systems’ goals align with human values, safeguarding children from AI misuse, protecting the integrity of elections against AI-related threats, evaluating the societal impacts of AI, and enforcing robust security practices. These processes and safeguards were outlined in a safety update the company released on May 21.