Key Facts
- ✓ OpenAI is hiring a Head of Preparedness to anticipate potential harms and guide safety strategy.
- ✓ The role offers a salary of $555K plus equity and leads the technical execution of the Preparedness framework.
- ✓ CEO Sam Altman acknowledged the "potential impact of models on mental health" as a significant challenge in 2025.
- ✓ The hiring follows the reassignment of former Head Aleksander Madry in July 2024 and the departure of Joaquin Quinonero Candela from the team in July 2025.
Quick Summary
OpenAI is actively recruiting a new Head of Preparedness to strengthen its internal safety protocols. This executive role is tasked with anticipating potential harms caused by AI models and mitigating the ways these systems can be abused. The hiring initiative comes at the close of a year marked by significant scrutiny of ChatGPT's mental health impacts, including several wrongful death lawsuits.
In a public statement, CEO Sam Altman described the position as critical for the current technological landscape. The new hire will be responsible for executing the company's Preparedness framework, which tracks frontier capabilities that pose risks of severe harm. The role is described as high-pressure, with the successful candidate expected to address complex challenges immediately upon joining the team.
The Role and Responsibilities
The primary objective of the new Head of Preparedness is to guide OpenAI's safety strategy by identifying and preparing for future risks. According to the job listing, this individual will lead the technical strategy and execution of the company's Preparedness framework. This framework is the organization's official approach to tracking and preparing for frontier capabilities that could create new risks of severe harm.
The compensation package for this position is substantial, reflecting the difficulty and importance of the work. The listing indicates a base salary of $555,000, in addition to equity grants. Sam Altman characterized the job as "a stressful job and you'll jump into the deep end pretty much immediately." This suggests the company is looking for a candidate ready to tackle urgent issues without a long ramp-up period.
"The potential impact of models on mental health was something we saw a preview of in 2025."
— Sam Altman, OpenAI CEO
Context of Mental Health Concerns
The hiring push follows a turbulent year for the company regarding the social impact of its products. OpenAI has faced numerous accusations concerning ChatGPT's effects on users' mental health. These concerns escalated to the point of involving wrongful death lawsuits, placing increased pressure on the company to demonstrate proactive safety measures.
Sam Altman acknowledged these specific issues in a post on X regarding the new position. He stated that the "potential impact of models on mental health was something we saw a preview of in 2025." He further noted that other "real challenges" have arisen alongside the increasing capabilities of these models. The new role is intended to address these specific vulnerabilities.
Recent Leadership Changes
This vacancy is the latest development in a period of significant turnover within OpenAI's safety leadership. The company's safety teams have undergone substantial changes in recent years. The previous Head of Preparedness, Aleksander Madry, was reassigned in July 2024.
Following Madry's reassignment, Sam Altman announced that the responsibilities would be distributed among executives Joaquin Quinonero Candela and Lilian Weng. However, the leadership structure remained unstable. Lilian Weng departed the company a few months later. By July 2025, Joaquin Quinonero Candela had also moved on from the preparedness team to lead recruiting at OpenAI, leaving the safety leadership position open for the current search.
"[This] is a critical role at an important time."
— Sam Altman, OpenAI CEO
"A stressful job and you'll jump into the deep end pretty much immediately."
— Sam Altman, OpenAI CEO
"Will lead the technical strategy and execution of OpenAI’s Preparedness framework, our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm."
— OpenAI Job Listing