OpenAI

About OpenAI

Empowering humanity through safe AI innovation

🏢 Tech · 👥 1,001+ employees · 📅 Founded 2015 · 📍 Mission District, San Francisco, CA · 💰 $68.9B · 4.2
B2C · B2B · Artificial Intelligence · Enterprise · SaaS · API · DevOps

Key Highlights

  • Headquartered in San Francisco, CA with 1,001+ employees
  • $68.9 billion raised in funding from top investors
  • Launched ChatGPT, gaining 1 million users in 5 days
  • 20-week paid parental leave and unlimited PTO policy

OpenAI is a leading AI research and development company headquartered in the Mission District of San Francisco, CA. With 1,001+ employees, OpenAI has raised $68.9 billion in funding and is known for groundbreaking products like ChatGPT, which gained over 1 million users within just five days.

🎁 Benefits

OpenAI offers flexible work hours and unlimited paid time off, encouraging at least 4 weeks of vacation per year. Employees also enjoy comprehensive benefits, including 20 weeks of paid parental leave.

🌟 Culture

OpenAI's culture is centered on its mission to ensure that AGI benefits all of humanity. The company values transparency and ethical considerations in AI development.

Overview

OpenAI is hiring an AI Research Engineer for their Preparedness team to monitor and mitigate risks associated with frontier AI models. You'll work on empirical evaluations and contribute to AI safety initiatives. This role requires a strong background in research and engineering.

Job Description

Who you are

You have a strong background in research and engineering, particularly in the field of AI safety. Your experience includes designing and conducting evaluations that assess the capabilities and risks of advanced AI systems. You are passionate about ensuring that AI technologies are developed responsibly and safely, and you understand the implications of frontier AI on society. You are comfortable working in a fast-paced environment and can manage multiple threads of research simultaneously. You have excellent analytical skills and can synthesize complex information into actionable insights. You are a collaborative team player who enjoys working with cross-functional teams to achieve common goals.

What you'll do

In this role, you will be responsible for owning the scientific validity of frontier preparedness capability evaluations. You will design new evaluations grounded in real-world scenarios to assess the risks associated with advanced AI models. Your work will involve closely monitoring and predicting the evolving capabilities of these systems, with a focus on identifying potential misuse risks. You will collaborate with other researchers and engineers to develop concrete procedures and infrastructure to mitigate these risks. Your contributions will play a crucial role in shaping OpenAI's approach to AI safety and preparedness.

What we offer

At OpenAI, you will be part of a mission-driven organization that believes in the positive potential of AI. We offer a collaborative work environment where your contributions will have a meaningful impact on the future of technology. You will have the opportunity to work with some of the brightest minds in the field and engage in research that addresses some of the most pressing challenges in AI safety. We are committed to providing reasonable accommodations to applicants with disabilities and fostering an inclusive workplace culture.


Similar Jobs You Might Like


Anthropic

Research Scientist

Anthropic · 📍 San Francisco (On-Site)

Anthropic is hiring a Research Scientist for their Frontier Red Team to evaluate and defend against emerging risks posed by advanced AI models. You'll focus on building a research program to understand these risks and develop necessary defenses. This role requires expertise in AI safety and risk assessment.

🏛️ On-Site · Mid-Level
9h ago
OpenAI

Research Scientist

OpenAI · 📍 San Francisco

OpenAI is hiring a Researcher for their Pretraining Safety team to develop safer AI models. You'll focus on identifying safety-relevant behaviors and designing architectures for safer behavior. This role requires expertise in AI safety and model evaluation.

3 months ago
Anthropic

Research Scientist

Anthropic · 📍 San Francisco (On-Site)

Anthropic is hiring a Senior Research Scientist for the Frontier Red Team to develop tools and frameworks for defending against advanced AI-enabled cyber threats. You'll work with cybersecurity and AI technologies in San Francisco.

🏛️ On-Site · Senior
9h ago
Anthropic

AI Research Engineer

Anthropic · 📍 San Francisco (On-Site)

Anthropic is hiring a Research Engineer for their Frontier Red Team to focus on the safety of autonomous AI systems. You'll work on building and evaluating model organisms and developing defensive agents. This role requires expertise in AI capabilities research and security.

🏛️ On-Site · Mid-Level
9h ago
OpenAI

AI Research Engineer

OpenAI · 📍 San Francisco

OpenAI is hiring a Senior AI Research Engineer to advance AI safety and robustness. You'll work on critical research projects to ensure safe AGI deployment. This role requires a strong background in machine learning and safety research.

Senior
2 years ago