OpenAI

About OpenAI

Empowering humanity through safe AI innovation

🏢 Tech · 👥 1,001+ employees · 📅 Founded 2015 · 📍 Mission District, San Francisco, CA · 💰 $68.9B · ⭐ 4.2
B2C · B2B · Artificial Intelligence · Enterprise · SaaS · API · DevOps

Key Highlights

  • Headquartered in San Francisco, CA with 1,001+ employees
  • $68.9 billion raised in funding from top investors
  • Launched ChatGPT, gaining 1 million users in 5 days
  • 20-week paid parental leave and unlimited PTO policy

OpenAI is a leading AI research and development company headquartered in the Mission District of San Francisco, CA. With over 1,001 employees, OpenAI has raised $68.9 billion in funding and is known for groundbreaking products like ChatGPT, which gained over 1 million users within just five days of launch.

🎁 Benefits

OpenAI offers flexible work hours and unlimited paid time off, encouraging at least 4 weeks of vacation per year. Employees also enjoy comprehensive benefits.

🌟 Culture

OpenAI's culture is centered around its mission to ensure that AGI benefits all of humanity. The company values transparency and ethical considerations in its work.


Senior AI Research Engineer

OpenAI 📍 San Francisco


Overview

OpenAI is hiring a Senior AI Research Engineer to advance AI safety and robustness. You'll work on critical research projects to ensure safe AGI deployment. This role requires a strong background in machine learning and safety research.

Job Description

Who you are

You have a strong background in AI safety and robustness, with experience in safety research that informs your approach to developing safe AGI. Your passion for ensuring that AI systems are beneficial and trustworthy drives your work, and you are committed to addressing the challenges that arise as AI technology evolves. You possess a deep understanding of machine learning principles and are adept at leveraging them to create robust AI models that can withstand adversarial conditions.

You have a proven track record of conducting impactful research in the field of AI safety, demonstrating your ability to set directions for research initiatives that align with OpenAI's mission. Your expertise in implementing safety policies without compromising model capabilities showcases your innovative thinking and problem-solving skills. You are comfortable navigating complex safety challenges and are dedicated to fostering a culture of trust and transparency in AI deployment.

What you'll do

In this role, you will lead research projects aimed at enhancing the safety and robustness of AI systems. You will collaborate with cross-functional teams to develop methodologies that enforce nuanced safety policies while maintaining model performance. Your work will involve addressing privacy and security risks associated with AI deployment, ensuring that the systems we build are not only powerful but also responsible.

You will play a critical role in shaping the future of AI safety at OpenAI, contributing to the development of frameworks that guide the safe deployment of AI technologies. Your research will inform best practices and strategies for mitigating adversarial threats, ultimately helping to create AI systems that are aligned with human values and societal needs. You will also engage with the broader research community to share findings and collaborate on initiatives that advance the field of AI safety.

What we offer

At OpenAI, you will be part of a mission-driven team that is dedicated to building safe and beneficial AI. We offer a collaborative work environment where your contributions will have a significant impact on the future of technology. You will have access to cutting-edge resources and support for your professional development, allowing you to grow your expertise in AI safety and research.

We encourage you to apply even if your experience doesn't match every requirement. Join us in shaping the future of technology and ensuring that AI is used responsibly and safely.


Similar Jobs You Might Like


OpenAI

Research Scientist

OpenAI 📍 San Francisco

OpenAI is hiring a Researcher for their Pretraining Safety team to develop safer AI models. You'll focus on identifying safety-relevant behaviors and designing architectures for safer behavior. This role requires expertise in AI safety and model evaluation.

3 months ago
OpenAI

Research Scientist

OpenAI 📍 San Francisco

OpenAI is hiring a Senior Research Scientist for their Safety Oversight team to advance AI safety research. You'll work on developing models to detect and mitigate AI misuse and misalignment. This position requires a strong background in machine learning and safety research.

Senior
1 year ago
Apple

ML Research Engineer

Apple 📍 San Francisco

Apple is hiring an ML Research Engineer to lead the design and development of automated safety benchmarking methodologies for AI features. You'll work with Python and machine learning techniques to ensure safe and trustworthy AI experiences. This role requires strong analytical skills and experience in AI safety.

Mid-Level
1 month ago
OpenAI

AI Research Engineer

OpenAI 📍 San Francisco - Hybrid

OpenAI is hiring a Research Engineer / Research Scientist for their Post-Training team to improve pre-trained models for ChatGPT and other products. You'll work with machine learning technologies and collaborate with research and product teams. This role requires strong ML engineering skills and research experience.

🏢 Hybrid · Mid-Level
2 months ago
OpenAI

Technical Lead

OpenAI 📍 San Francisco

OpenAI is hiring a Technical Lead for their Safety Research team to develop strategies addressing potential harms from AI misalignment. You'll work on advancing safety capabilities in AI models and systems. This role requires strong leadership and research skills.

Lead
4 months ago