
About OpenAI
Empowering humanity through safe AI innovation
Key Highlights
- Headquartered in San Francisco, CA with 1,001+ employees
- $68.9 billion raised in funding from top investors
- Launched ChatGPT, gaining 1 million users in 5 days
- 20-week paid parental leave and unlimited PTO policy
OpenAI is a leading AI research and development company headquartered in the Mission District of San Francisco, CA. With more than 1,000 employees, OpenAI has raised $68.9 billion in funding and is known for groundbreaking products like ChatGPT, which gained over 1 million users within just five days.
Benefits
OpenAI offers flexible work hours and unlimited paid time off, encouraging at least four weeks of vacation per year. Employees enjoy comprehensive...
Culture
OpenAI's culture is centered around its mission to ensure that AGI benefits all of humanity. The company values transparency and ethical considerations...

Compliance Manager • Senior
OpenAI • San Francisco - Hybrid
Overview
OpenAI is hiring a Model Policy Manager to shape policy creation and development for AI safety. You'll work closely with research teams to ensure technologies align with human values. This role requires strong decision-making skills and the ability to develop cohesive policies.
Job Description
Who you are
You have a strong background in policy development and a deep understanding of AI safety; your experience allows you to identify and develop cohesive taxonomies of harm on high-risk topics with a sense of urgency. You can balance internal and external input in making complex decisions, carefully thinking through trade-offs while writing principled, enforceable policies based on core values. Your ability to communicate effectively with diverse stakeholders ensures that you can advocate for safety while fostering a culture of trust and transparency.
You thrive in collaborative environments, working closely with research teams to inform model training and policy creation; your insights help shape the future of AI technologies, ensuring they do not create harm. You are adept at navigating the complexities of AI safety, understanding the nuances of catastrophic risk, mental health, and teen safety. Your analytical mindset allows you to define evaluation criteria for foundational models' ability to reason about safety, ensuring that policies are actionable and objective.
What you'll do
In this senior role, you will help shape policy creation and development at OpenAI, making a significant impact on AI safety. You will drive rapid policy taxonomy iteration based on data, ensuring that model behavior aligns with desired human values and norms. Your responsibilities will include co-designing policies with models and for models, focusing on key areas such as catastrophic risk and multimodal safety. You will work closely with research teams to ensure that policies are informed by the latest findings and best practices in AI safety.
You will be responsible for defining evaluation criteria for foundational models, ensuring that they can reason about safety effectively. Your role will involve making complex decisions that balance various inputs, and you will be expected to write clear and enforceable policies that reflect OpenAI's commitment to safety and transparency. You will also engage with external stakeholders to gather insights and feedback, ensuring that policies are comprehensive and well-informed.
What we offer
OpenAI offers a hybrid work model, allowing you to work three days in the office per week while providing relocation assistance for new employees. You will be part of a team that is at the forefront of AI safety, contributing to groundbreaking technologies that have the potential to solve immense global challenges. We encourage you to apply even if your experience doesn't match every requirement, as we value diverse perspectives and backgrounds.
Similar Jobs You Might Like

Product Manager
OpenAI is hiring a Product Policy Manager focused on Child Safety to develop and implement policies that govern the use of AI technologies. This role requires expertise in child safety and AI policy, working closely with product and legal teams.

Product Manager
OpenAI is hiring a Product Manager for the Model Behavior team to define and guide the future of AI model behavior. You'll collaborate with various teams to improve model capabilities and ensure user safety. This role requires a proactive approach to solving complex problems.

Product Manager
OpenAI is hiring a Product Manager for their API Model Behavior team to define and guide the future of AI model behavior in real-world applications. You'll collaborate with research and engineering teams to drive impactful improvements. This role requires a proactive, technically adept PM.

Product Manager
Descript is hiring a Product Manager to lead the AI Research and Enablement roadmap. You'll work with cutting-edge AI technology to enhance video editing capabilities. This position requires experience in product management and a strong understanding of AI and ML.

Engineering Manager
Baseten is hiring an Engineering Manager focused on Model Performance to lead a team optimizing ML model inference. You'll work with cutting-edge AI technologies in San Francisco.