OpenAI

About OpenAI

Empowering humanity through safe AI innovation

🏢 Tech · 👥 1,001+ employees · 📅 Founded 2015 · 📍 Mission District, San Francisco, CA · 💰 $68.9B · ⭐ 4.2
B2C · B2B · Artificial Intelligence · Enterprise · SaaS · API · DevOps

Key Highlights

  • Headquartered in San Francisco, CA with 1,001+ employees
  • $68.9 billion raised in funding from top investors
  • Launched ChatGPT, gaining 1 million users in 5 days
  • 20-week paid parental leave and unlimited PTO policy

OpenAI is a leading AI research and development company headquartered in the Mission District of San Francisco, CA. With more than 1,000 employees, OpenAI has raised $68.9 billion in funding and is known for groundbreaking products like ChatGPT, which gained over 1 million users within just five days.

🎁 Benefits

OpenAI offers flexible work hours and encourages unlimited paid time off, promoting at least 4 weeks of vacation per year. Employees enjoy comprehensive benefits.

🌟 Culture

OpenAI's culture is centered on its mission to ensure that AGI benefits all of humanity. The company values transparency and ethical considerations.

OpenAI

Research Scientist • Senior

OpenAI • San Francisco

Overview

OpenAI is hiring a Senior Research Scientist for their Safety Oversight team to advance AI safety research. You'll work on developing models to detect and mitigate AI misuse and misalignment. This position requires a strong background in machine learning and safety research.

Job Description

Who you are

You have a strong background in AI safety research, with a passion for ensuring that artificial intelligence is deployed responsibly and beneficially. Your experience includes developing and refining models that can effectively monitor AI systems for misuse and misalignment, and you are familiar with the latest advancements in machine learning and human-AI collaboration. You understand the importance of robustness in AI systems and are committed to fostering a culture of trust and transparency in AI deployment.

With a proven track record in safety research, you can set the direction for impactful research projects that align with OpenAI's mission to build and deploy safe AGI. You are skilled at identifying emerging patterns of misuse and misalignment, and you thrive in collaborative environments, working closely with cross-functional teams to drive AI safety initiatives forward.

What you'll do

In this role, you will lead research efforts aimed at maintaining effective oversight of AI systems, ensuring they are safe and beneficial for society. You will develop and refine AI monitor models that detect and mitigate known and emerging patterns of misuse and misalignment. Your work will involve conducting cutting-edge research in areas such as human-AI collaboration and scalable oversight, contributing to the advancement of OpenAI's capabilities in AI safety.

You will collaborate with other researchers and engineers to implement novel methods for identifying and mitigating AI misuse, and you will play a critical role in defining the future of safe AI systems at OpenAI. Your contributions will directly impact the deployment of AI technologies, helping to ensure that they are used responsibly and ethically.

What we offer

At OpenAI, you will be part of a mission-driven team that is at the forefront of AI safety research. We offer a collaborative and inclusive work environment where your ideas and contributions are valued. You will have the opportunity to work on groundbreaking projects that aim to shape the future of technology and ensure that AI benefits everyone. We encourage you to apply even if your experience doesn't match every requirement, as we believe diverse teams build better products.

Similar Jobs You Might Like

OpenAI

Research Scientist

OpenAIβ€’πŸ“ San Francisco

OpenAI is hiring a Researcher for their Pretraining Safety team to develop safer AI models. You'll focus on identifying safety-relevant behaviors and designing architectures for safer behavior. This role requires expertise in AI safety and model evaluation.

3 months ago
OpenAI

Technical Lead

OpenAIβ€’πŸ“ San Francisco

OpenAI is hiring a Technical Lead for their Safety Research team to develop strategies addressing potential harms from AI misalignment. You'll work on advancing safety capabilities in AI models and systems. This role requires strong leadership and research skills.

Lead
4 months ago
OpenAI

AI Research Engineer

OpenAIβ€’πŸ“ San Francisco

OpenAI is hiring a Senior AI Research Engineer to advance AI safety and robustness. You'll work on critical research projects to ensure safe AGI deployment. This role requires a strong background in machine learning and safety research.

Senior
2 years ago
Apple

ML Research Engineer

Appleβ€’πŸ“ San Francisco

Apple is hiring an ML Research Engineer to lead the design and development of automated safety benchmarking methodologies for AI features. You'll work with Python and machine learning techniques to ensure safe and trustworthy AI experiences. This role requires strong analytical skills and experience in AI safety.

Mid-Level
1 month ago
OpenAI

Research Scientist

OpenAIβ€’πŸ“ San Francisco

OpenAI is hiring a Research Lead for Chemical & Biological Risk to design and implement mitigation strategies for AI safety. You'll oversee safeguards against chemical and biological misuse across OpenAI's products. This role requires technical depth and decisive leadership.

Lead
5 months ago