OpenAI

About OpenAI

Empowering humanity through safe AI innovation

🏢 Tech • 👥 1,001+ employees • 📅 Founded 2015 • 📍 Mission District, San Francisco, CA • 💰 $68.9B • ⭐ 4.2
B2C • B2B • Artificial Intelligence • Enterprise • SaaS • API • DevOps

Key Highlights

  • Headquartered in San Francisco, CA with 1,001+ employees
  • $68.9 billion raised in funding from top investors
  • Launched ChatGPT, gaining 1 million users in 5 days
  • 20-week paid parental leave and unlimited PTO policy

OpenAI is a leading AI research and development company headquartered in the Mission District of San Francisco, CA. With more than 1,000 employees, OpenAI has raised $68.9 billion in funding and is known for groundbreaking products like ChatGPT, which gained over 1 million users within just five days.

🎁 Benefits

OpenAI offers flexible work hours and unlimited paid time off, encouraging at least four weeks of vacation per year. Employees also receive comprehensive benefits, including a 20-week paid parental leave policy.

🌟 Culture

OpenAI's culture is centered around its mission to ensure that AGI benefits all of humanity. The company values transparency and ethical considerations in its work.


Technical Lead • Lead

OpenAI • San Francisco


Overview

OpenAI is hiring a Technical Lead for their Safety Research team to develop strategies addressing potential harms from AI misalignment. You'll work on advancing safety capabilities in AI models and systems. This role requires strong leadership and research skills.

Job Description

Who you are

You have a strong background in AI safety research and a proven track record of leading teams towards innovative solutions. Your experience includes developing strategies that address potential harms from AI misalignment, ensuring that safety measures evolve alongside technological advancements. You are adept at setting ambitious goals and milestones for research directions, and you thrive in collaborative environments where you can foster a culture of trust and transparency.

You possess excellent communication skills, allowing you to articulate complex safety concepts to diverse stakeholders. Your ability to mentor and guide team members is complemented by your commitment to continuous improvement in safety practices. You understand the importance of human oversight in AI systems and are passionate about developing methods that enhance safety and robustness.

What you'll do

As a Technical Lead, you will spearhead the Safety Research team's initiatives, focusing on advancing capabilities for implementing robust, safe behavior in AI models. You will set north-star goals and milestones for new research directions, developing challenging evaluations to track progress. Your role will involve exploratory research to improve safety common sense and generalizable reasoning, as well as creating new evaluations to detect misalignment or inner goals of AI systems.

You will collaborate closely with other teams to ensure that safety measures are integrated into the deployment of AI technologies. Your leadership will guide the team in addressing evolving risks and ensuring that our systems are robust against harmful misuse. You will play a critical role in shaping the future of AI safety, contributing to OpenAI's mission to build and deploy safe AGI.

What we offer

At OpenAI, you will be part of a team that is at the forefront of AI safety research, working on impactful projects that aim to benefit society. We are committed to providing reasonable accommodations to applicants with disabilities and fostering an inclusive work environment. Join us in shaping the future of technology and ensuring that the benefits of AI are widely shared.


Similar Jobs You Might Like


OpenAI

Senior Research Scientist

OpenAIβ€’πŸ“ San Francisco

OpenAI is hiring a Senior Research Scientist for their Safety Oversight team to advance AI safety research. You'll work on developing models to detect and mitigate AI misuse and misalignment. This position requires a strong background in machine learning and safety research.

Senior
1 year ago
Reflection

AI Research Engineer

Reflectionβ€’πŸ“ San Francisco - On-Site

Reflection is seeking a Senior AI Research Engineer to lead safety evaluations for their AI models. You'll work with advanced methodologies in AI safety and contribute to the development of automated evaluation pipelines. This role requires a graduate degree in Computer Science or related fields and deep technical expertise in LLM safety.

πŸ›οΈ On-SiteSenior
1 month ago
OpenAI

Research Lead

OpenAIβ€’πŸ“ San Francisco

OpenAI is hiring a Research Lead for Chemical & Biological Risk to design and implement mitigation strategies for AI safety. You'll oversee safeguards against chemical and biological misuse across OpenAI's products. This role requires technical depth and decisive leadership.

Lead
5 months ago
OpenAI

Research Scientist

OpenAIβ€’πŸ“ San Francisco

OpenAI is hiring a Researcher for their Pretraining Safety team to develop safer AI models. You'll focus on identifying safety-relevant behaviors and designing architectures for safer behavior. This role requires expertise in AI safety and model evaluation.

3 months ago
Apple

ML Research Engineer

Appleβ€’πŸ“ San Francisco

Apple is hiring an ML Research Engineer to lead the design and development of automated safety benchmarking methodologies for AI features. You'll work with Python and machine learning techniques to ensure safe and trustworthy AI experiences. This role requires strong analytical skills and experience in AI safety.

Mid-Level
1 month ago