OpenAI

About OpenAI

Empowering humanity through safe AI innovation

Tech · 👥 1,001+ employees · 📅 Founded 2015 · 📍 Mission District, San Francisco, CA · 💰 $68.9B raised · ⭐ 4.2
B2C · B2B · Artificial Intelligence · Enterprise · SaaS · API · DevOps

Key Highlights

  • Headquartered in San Francisco, CA with 1,001+ employees
  • $68.9 billion raised in funding from top investors
  • Launched ChatGPT, gaining 1 million users in 5 days
  • 20-week paid parental leave and unlimited PTO policy

OpenAI is a leading AI research and development company headquartered in the Mission District of San Francisco, CA. With more than 1,000 employees, OpenAI has raised $68.9 billion in funding and is known for groundbreaking products like ChatGPT, which gained over 1 million users within just five days.

🎁 Benefits

OpenAI offers flexible work hours and unlimited paid time off, encouraging at least four weeks of vacation per year. Employees also enjoy comprehensive benefits, including 20 weeks of paid parental leave.

🌟 Culture

OpenAI's culture is centered on its mission to ensure that AGI benefits all of humanity. The company values transparency and ethical considerations in its work.

Overview

OpenAI is hiring a Manager for the Protection Scientist Engineer team to lead efforts in identifying and investigating product misuse. You'll work with data science and machine learning to develop robust safety systems. This role requires experience in interdisciplinary collaboration and policy development.

Job Description

Who you are

You have a strong background in data science and machine learning, with experience in developing systems that proactively identify and mitigate abuse in technology products. Your leadership skills enable you to manage a small team effectively, guiding them in their investigations and ensuring robust monitoring systems are in place. You understand the importance of cross-functional collaboration, working closely with product, policy, and engineering teams to create data-backed solutions.

You possess excellent problem-solving abilities and can navigate complex situations, especially when responding to critical escalations. Your experience in policy and protocol development allows you to contribute to the creation of effective product policies that enhance user safety. You are committed to OpenAI's mission of ensuring that artificial intelligence benefits all of humanity, and you are passionate about leveraging technology to address real-world challenges.

What you'll do

As the Manager of Protection Scientist Engineers, you will lead a team responsible for designing and building systems that identify product abuse and enforce measures against it. You will ensure that robust abuse monitoring is in place for both new and existing products, and you will prototype systems to defend against high-risk harms. Your role will involve investigating critical escalations and collaborating with various teams to enhance safety measures.

You will be responsible for developing and implementing strategies that support the team's mission, ensuring that your team has the resources and guidance needed to succeed. You will also engage in ongoing learning and adaptation, iteratively updating systems based on insights gained from investigations. Your leadership will foster a culture of safety and accountability within the team, driving impactful results that align with OpenAI's goals.

What we offer

At OpenAI, you will be part of a mission-driven organization that values innovation and collaboration. We offer a competitive salary and benefits package, along with opportunities for professional growth and development. You will work in a supportive environment that encourages creativity and critical thinking, allowing you to make a meaningful impact in the field of artificial intelligence. Join us in shaping the future of technology and ensuring that AI is used responsibly and ethically.

Similar Jobs You Might Like

Protection Scientist Engineer

OpenAIβ€’πŸ“ London

OpenAI is hiring a Protection Scientist Engineer to design and build systems that identify abuse of its products and enforce measures against it. You'll work with data science and machine learning to enhance safety measures. This role requires interdisciplinary expertise and collaboration across teams.

Mid-Level
4 months ago

Security Engineer

Uber • 📍 San Francisco - On-Site

Uber is hiring a Senior Security Investigator to lead complex security investigations and enhance automation in their CyberSecurity Incident Response team. You'll work with forensic analysis and incident response strategies in San Francisco.

πŸ›οΈ On-SiteSenior
3 months ago

Security Engineer

OpenAI • 📍 San Francisco - On-Site

OpenAI is hiring a Security Engineer focused on Insider Threat Detection & Response to innovate on security infrastructure and safeguard sensitive assets. You'll work with technologies like AWS and Python in San Francisco.

πŸ›οΈ On-SiteMid-Level
3 months ago

Research Scientist

OpenAI • 📍 San Francisco

OpenAI is hiring a Research Lead for Chemical & Biological Risk to design and implement mitigation strategies for AI safety. You'll oversee safeguards against chemical and biological misuse across OpenAI's products. This role requires technical depth and decisive leadership.

Lead
5 months ago

Engineering Manager

Brex • 📍 San Francisco - On-Site

Brex is seeking an Engineering Manager for their Security Engineering team to lead and support Application Security and Security Operations. You'll work on building a world-class security program while ensuring a secure environment for customers and staff.

πŸ›οΈ On-SiteLead
1 day ago