• OpenAI CEO Sam Altman is recruiting a new head of preparedness, a position offering $555,000 in base pay plus equity.
  • Despite the high salary, experts suggest the role is exceptionally difficult to fill.
  • The primary challenge involves balancing AI safety risks with the company's aggressive product release schedule, which has included Sora 2 and Instant Checkout this year alone.
  • Maura Grossman, a research professor at the University of Waterloo, described the position as 'close to an impossible job' because the hire may need to advise slowing down development.

Quick Summary

OpenAI is currently seeking a new head of preparedness, a role that offers a base salary of $555,000 plus equity. While the compensation is competitive, industry experts suggest that filling the position may prove difficult due to the inherent conflicts between safety protocols and rapid product development.

The individual hired for this role will be responsible for balancing AI safety risks against the fast-paced release schedule favored by CEO Sam Altman. Altman has personally described the position as a 'stressful' job in which the new hire will 'jump into the deep end pretty much immediately.' The challenge lies in potentially having to slow product rollouts to ensure safety, a task one expert described as 'rolling a rock up a steep hill.'

The Challenge of Balancing Speed and Safety

Recruiting for the head of preparedness role is proving trickier than the significant financial package might suggest. The core difficulty lies in the job's requirement to navigate the tension between ensuring AI safety and supporting the company's aggressive growth strategy. Sam Altman has demonstrated a strong preference for releasing products at an exceptionally fast pace.

This year alone, OpenAI has launched several major products, including:

  • Sora 2 video app
  • Instant Checkout for ChatGPT
  • New AI models and developer tools
  • More advanced agent capabilities

According to Maura Grossman, a research professor at the University of Waterloo's School of Computer Science, the role is 'close to an impossible job.' She noted that the person in this position will likely need to tell Altman to slow down or that specific goals should not be met, an effort she characterized as 'rolling a rock up a steep hill.' Despite these hurdles, the role remains a potential 'dream come true' for the right individual, given OpenAI's massive impact on daily life and compensation that is competitive even by AI-industry standards.

"This will be a stressful job, and you'll jump into the deep end pretty much immediately."
Sam Altman, OpenAI CEO

Qualifications and Ideal Candidate Profile

The job posting for the head of preparedness does not list traditional requirements such as a college degree or a minimum number of years of experience. Instead, OpenAI specifies that a candidate 'might thrive' if they possess specific attributes. These include experience leading technical teams and the ability to make high-stakes technical judgments under uncertainty.

Key qualifications sought include:

  • Deep technical expertise in machine learning, AI safety, evaluation, or security
  • Ability to align diverse stakeholders around safety decisions
  • Comfort with high-stakes judgment under uncertainty

Opinions differ on the ideal background for the hire. Richard Lachman, a professor of digital media at Toronto Metropolitan University, suggests that a seasoned tech-industry executive might be a better fit than an academic, arguing that academics tend to be more cautious and risk-averse. Lachman expects OpenAI to seek someone who can protect the company's public image on safety while allowing rapid innovation to continue. He described the ideal candidate not as a 'yes person,' but as someone who is 'on brand.'

Context: Previous Leadership and Safety Scrutiny

This recruitment drive follows the departure of Aleksander Madry, the former head of preparedness, who moved into a new role in July 2024. His exit left a vacancy within the company's Safety Systems team, which is responsible for building evaluations, safety frameworks, and safeguards for AI models. Madry's academic background has fueled the debate over the ideal profile for his successor.

OpenAI's approach to safety has been a point of contention. The company has faced internal concerns that prompted resignations from prominent early employees, including a former head of its safety team. Externally, the company has been sued by individuals alleging that its technology reinforces delusions and drives harmful behavior.

In October, OpenAI acknowledged that some ChatGPT users have exhibited possible signs of mental health problems. The company stated it is working with mental health experts to improve the chatbot's responses to users showing signs of psychosis, mania, self-harm, suicide, or emotional attachment.

"The head of preparedness role is 'close to an impossible job,' because at times the person in it will likely need to tell Altman to slow down or that certain goals shouldn't be met."

Maura Grossman, Research Professor, University of Waterloo

"They'll be 'rolling a rock up a steep hill.'"

Maura Grossman, Research Professor, University of Waterloo

"This is not quite a 'yes person,' but somebody who's going to be on brand."

Richard Lachman, Professor, Toronto Metropolitan University

Frequently Asked Questions

Why is OpenAI's head of preparedness role difficult to fill?

The role is considered difficult because it requires balancing safety concerns with the company's rapid product release schedule. The hire may need to advise CEO Sam Altman to slow down development, an effort one expert likened to 'rolling a rock up a steep hill.'

What are the requirements for the OpenAI preparedness role?

While there is no requirement for a college degree or specific years of experience, OpenAI seeks candidates with deep technical expertise in machine learning or AI safety, experience leading technical teams, and the ability to make high-stakes judgments under uncertainty.