
About OpenAI
Empowering humanity through safe AI innovation
Key Highlights
- Headquartered in San Francisco, CA with 1,001+ employees
- $68.9 billion raised in funding from top investors
- Launched ChatGPT, gaining 1 million users in 5 days
- 20-week paid parental leave and unlimited PTO policy
OpenAI is a leading AI research and development company headquartered in the Mission District of San Francisco, CA. With more than 1,000 employees, OpenAI has raised $68.9 billion in funding and is known for groundbreaking products like ChatGPT, which gained over 1 million users within just five days.
🎁 Benefits
OpenAI offers flexible work hours and unlimited paid time off, encouraging at least four weeks of vacation per year, along with 20 weeks of paid parental leave and comprehensive benefits.
🌟 Culture
OpenAI's culture is centered on its mission to ensure that AGI benefits all of humanity. The company values transparency and ethical considerations in AI development.
Overview
OpenAI is hiring a Researcher for their Pretraining Safety team to develop safer AI models. You'll focus on identifying safety-relevant behaviors and designing architectures for safer behavior. This role requires expertise in AI safety and model evaluation.
Job Description
Who you are
You have a strong background in AI safety and model evaluation, with experience identifying safety-relevant behaviors in machine learning models. Your expertise allows you to design architectures that prioritize safety from the outset. You are familiar with the challenges of monitoring unsafe behaviors and take a proactive approach to mitigating risks during the training process.
You possess a deep understanding of how behaviors emerge in AI systems and can conduct foundational research to measure these behaviors reliably. Your collaborative spirit enables you to work effectively across teams, contributing to a culture of trust and transparency in AI development. You are committed to OpenAI's mission of building safe AGI and are eager to contribute to this important work.
What you'll do
In this role, you will pioneer safety in AI models before they reach deployment. You will identify safety-relevant behaviors as they first emerge in base models, evaluating and reducing risks without waiting for full-scale training runs. Your work will involve designing architectures and training setups that make safer behavior the default, strengthening models by incorporating richer, earlier safety signals. You will collaborate with teams across OpenAI's safety ecosystem, ensuring that safety considerations are integrated throughout model development.
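To make the evaluation step concrete, here is a minimal, hypothetical sketch (not drawn from the posting, and not OpenAI's actual tooling) of scoring one safety-relevant behavior, refusal, on intermediate pretraining checkpoints so that risks surface long before a full-scale run completes. The probe prompts, refusal heuristic, and monitor loop are all illustrative assumptions.

```python
"""Hypothetical sketch: score a safety-relevant behavior on
intermediate pretraining checkpoints instead of waiting for the
full run. All names here are illustrative, not OpenAI internals."""

from typing import Callable, Iterable, List, Tuple

# Hypothetical probe prompts targeting one safety-relevant behavior.
PROBES: List[str] = [
    "<safety-probe prompt 1>",
    "<safety-probe prompt 2>",
]

def refusal_rate(generate: Callable[[str], str], probes: List[str]) -> float:
    """Fraction of probe prompts the checkpoint declines to answer.

    `generate` is assumed to wrap a single model checkpoint; the
    string-match refusal heuristic is deliberately crude, for
    illustration only.
    """
    markers = ("i can't", "i cannot", "i won't")
    refused = sum(
        1 for prompt in probes
        if any(m in generate(prompt).lower() for m in markers)
    )
    return refused / len(probes)

def monitor(checkpoints: Iterable[Tuple[int, Callable[[str], str]]],
            floor: float = 0.8) -> None:
    """Flag any checkpoint whose refusal rate drops below `floor`,
    so a regression is caught early in training."""
    for step, generate in checkpoints:
        rate = refusal_rate(generate, PROBES)
        status = "OK" if rate >= floor else "REGRESSION"
        print(f"step {step:>8}: refusal_rate={rate:.2f} [{status}]")
```

In practice, the string-match heuristic would be replaced by a trained classifier or human review; the point is only that checkpoints are scored as they appear, rather than after training finishes.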
What we offer
At OpenAI, you will be part of a mission-driven team dedicated to ensuring that AI technologies benefit society. We provide a supportive environment where you can grow your skills and contribute to groundbreaking research in AI safety. You will have the opportunity to work with leading experts in the field and engage in meaningful projects that shape the future of technology. We encourage you to apply even if your experience doesn't match every requirement, as we value diverse perspectives and backgrounds.
Similar Jobs You Might Like

AI Research Engineer
OpenAI is hiring a Senior AI Research Engineer to advance AI safety and robustness. You'll work on critical research projects to ensure safe AGI deployment. This role requires a strong background in machine learning and safety research.

Research Scientist
OpenAI is hiring a Senior Research Scientist for their Safety Oversight team to advance AI safety research. You'll work on developing models to detect and mitigate AI misuse and misalignment. This position requires a strong background in machine learning and safety research.

AI Research Engineer
Anthropic is hiring a Research Engineer for their ML Performance and Scaling team to ensure reliable and efficient training of AI models. You'll work with Python and machine learning frameworks like TensorFlow and PyTorch in San Francisco.

ML Research Engineer
Apple is hiring an ML Research Engineer to lead the design and development of automated safety benchmarking methodologies for AI features. You'll work with Python and machine learning techniques to ensure safe and trustworthy AI experiences. This role requires strong analytical skills and experience in AI safety.

Technical Lead
OpenAI is hiring a Technical Lead for their Safety Research team to develop strategies addressing potential harms from AI misalignment. You'll work on advancing safety capabilities in AI models and systems. This role requires strong leadership and research skills.