
About OpenAI
Empowering humanity through safe AI innovation
Key Highlights
- Headquartered in San Francisco, CA with 1,001+ employees
- $68.9 billion raised in funding from top investors
- Launched ChatGPT, gaining 1 million users in 5 days
- 20-week paid parental leave and unlimited PTO policy
OpenAI is a leading AI research and development company headquartered in the Mission District of San Francisco, CA. With more than 1,000 employees, OpenAI has raised $68.9 billion in funding and is known for groundbreaking products like ChatGPT, which gained over 1 million users within just five days.
🎁 Benefits
OpenAI offers flexible work hours and unlimited paid time off, encouraging at least four weeks of vacation per year. Employees also enjoy comprehensive benefits, including 20 weeks of paid parental leave.
🌟 Culture
OpenAI's culture is centered on its mission to ensure that AGI benefits all of humanity. The company values transparency and ethical considerations in its work.
Overview
OpenAI is hiring a Protection Scientist Engineer to design and build systems that identify and mitigate abuse of its products. You'll apply data science and machine learning to strengthen safety measures. The role requires interdisciplinary expertise and collaboration across teams.
Job Description
Who you are
You have a strong background in data science and machine learning, with experience in developing systems that proactively identify and mitigate abuse in technology products. Your investigative skills allow you to analyze complex data and understand the nuances of product misuse, enabling you to contribute to effective policy and protocol development. You thrive in cross-functional environments, collaborating with product, policy, and engineering teams to ensure robust safety measures are in place. Your ability to communicate complex concepts clearly makes you an effective team member, and you are comfortable participating in on-call rotations to address urgent escalations.
Desirable
Experience with AI technologies and a passion for ethical AI deployment are highly valued. Familiarity with product safety protocols and abuse monitoring systems will give you an edge in this role.
What you'll do
As a Protection Scientist Engineer, you will be responsible for designing and implementing systems that monitor for abuse across OpenAI's products. This involves developing robust monitoring frameworks for new and existing products, ensuring that safety measures are continuously updated based on real-world data. You will investigate critical escalations that may not be captured by existing safety systems, requiring a deep understanding of both the products and the data involved. Your role will also include collaborating with various teams to develop data-backed product policies that enhance user safety and product integrity.
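To give a flavor of what such a monitoring framework might look like in practice, here is a minimal, purely illustrative Python sketch (not taken from the posting; the signal names, weights, and threshold are all hypothetical) that combines simple usage signals into a risk score and routes high-scoring events to human review:

```python
from dataclasses import dataclass

# Hypothetical illustration of a data-backed abuse-monitoring check.
# Every signal, weight, and threshold below is invented for clarity.

@dataclass
class UsageEvent:
    user_id: str
    requests_per_minute: float
    flagged_content_ratio: float  # fraction of outputs flagged by a classifier

def abuse_score(event: UsageEvent) -> float:
    """Combine weighted signals into a single risk score in [0, 1]."""
    rate_signal = min(event.requests_per_minute / 100.0, 1.0)
    return 0.4 * rate_signal + 0.6 * event.flagged_content_ratio

def needs_review(event: UsageEvent, threshold: float = 0.7) -> bool:
    """Escalate events whose combined score crosses the review threshold."""
    return abuse_score(event) >= threshold

if __name__ == "__main__":
    event = UsageEvent("u123", requests_per_minute=90.0, flagged_content_ratio=0.6)
    print(abuse_score(event))   # 0.72
    print(needs_review(event))  # True -> route to an investigator
```

A production system would draw on far richer signals and learned models; the sketch only shows the shape of the pipeline the role describes: score signals, apply a threshold, escalate for investigation.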
What we offer
OpenAI provides a collaborative and innovative work environment where you can contribute to meaningful projects that impact the future of technology. You will have the opportunity to work with a diverse team of experts dedicated to ensuring that artificial intelligence benefits all of humanity. We encourage you to apply even if your experience doesn't match every requirement, as we value diverse perspectives and backgrounds.
Similar Jobs You Might Like

Manager
OpenAI is hiring a Manager for the Protection Scientist Engineer team to lead efforts in identifying and investigating product misuse. You'll work with data science and machine learning to develop robust safety systems. This role requires experience in interdisciplinary collaboration and policy development.

Security Engineer
OpenAI is hiring a Security Engineer focused on Detection and Response to innovate and secure transformational AI technologies. You'll work with AWS, Linux, and various security tools in a hybrid model based in London.

Integrity Science Engineer
Meta is hiring an Integrity Science Engineer to tackle complex problems and improve community experiences on Facebook and Instagram. You'll work with big data and detection systems, leveraging skills in data analysis and machine learning.

Software Engineer
Anthropic is hiring a Software Engineer for their Safeguards Infrastructure team to build foundational systems for AI safety and oversight. You'll work with Python to develop robust mechanisms for monitoring and preventing misuse of AI models. This role requires 4-10+ years of experience in software engineering.