In October 2023, OpenAI announced a new initiative to develop its approach to risk preparedness. The initiative aligns with OpenAI’s mission to build safe Artificial General Intelligence (AGI) by addressing the broad spectrum of safety risks related to AI. OpenAI believes that frontier AI models, meaning future models that exceed the capabilities of today’s most advanced systems, have the potential to bring myriad benefits to humanity. At the same time, OpenAI recognizes the increasingly severe risks these models could pose.
To manage these risks, OpenAI is establishing a new team called Preparedness. The team, led by Aleksander Madry, will focus on evaluating the capabilities of frontier AI systems, conducting internal red-teaming exercises, and assessing how these systems could be dangerously misused, both now and in the future.
The Preparedness team will track, evaluate, and forecast catastrophic risks in several categories: individualized persuasion; cybersecurity; chemical, biological, radiological, and nuclear (CBRN) threats; and autonomous replication and adaptation (ARA). The team will also develop and maintain a Risk-Informed Development Policy (RDP), which will detail OpenAI’s approach to developing rigorous evaluations and monitoring of frontier model capabilities, creating a spectrum of protective actions, and establishing a governance structure for accountability and oversight across the development process.

To support the new Preparedness team, OpenAI is also launching the AI Preparedness Challenge for catastrophic misuse prevention. The challenge aims to surface less obvious areas of potential concern and to help build the team. OpenAI will offer $25,000 in API credits to up to 10 top submissions, publish novel ideas and entries, and scout the challenge’s top contenders for Preparedness candidates.
As frontier AI technologies evolve, OpenAI’s initiative underscores the need for stringent risk-management strategies across the AI sector. Its focus on preparedness will be essential to mitigating the risk of catastrophic misuse of these powerful tools.