Anthropic

About Anthropic

Building safe and reliable AI systems for everyone

🏢 Tech · 👥 1,001+ employees · 📅 Founded 2021 · 📍 SoMa, San Francisco, CA · 💰 $29.3B raised
B2B · Artificial Intelligence · Deep Tech · Machine Learning · SaaS

Key Highlights

  • Headquartered in SoMa, San Francisco, CA
  • Raised $29.3 billion in funding, including $13 billion Series F
  • Over 1,000 employees focused on AI safety and research
  • Launched Claude, an AI chat assistant rivaling ChatGPT

Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backed by Google, Anthropic has raised $29.3 billion in funding, including a $13 billion Series F round.

🎁 Benefits

Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits.

🌟 Culture

Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs compared to existing AI systems.

Applied Scientist (Mid-Level)

Anthropic · San Francisco

Posted 14h ago · Mid-Level · Applied Scientist · 📍 San Francisco · 📍 New York · 💰 $320,000 - $405,000 / year

Overview

Anthropic is seeking an Applied Safety Research Engineer to develop methods for evaluating AI safety. You'll work with machine learning and Python to design experiments that improve model evaluations. This role requires a research-oriented mindset and experience in applied ML.

Job Description

Who you are

You have a strong background in applied machine learning and engineering, with experience in designing experiments that enhance evaluation quality. You understand the importance of creating representative test data and simulating realistic user behavior to ensure model safety. Your analytical skills allow you to identify gaps in evaluation coverage and inform necessary improvements. You are comfortable working at the intersection of research and engineering, and you thrive in collaborative environments where you can contribute to meaningful AI safety initiatives.

Desirable

Experience with safety evaluations in AI systems is a plus, as well as familiarity with user behavior analysis and grading accuracy validation. You are passionate about ensuring AI systems are safe and beneficial for users and society.

What you'll do

In this role, you will design and run experiments aimed at improving the quality of AI safety evaluations. You will develop methods to generate representative test data and simulate realistic user behavior, which are crucial for validating grading accuracy. Your work will involve analyzing how various factors impact model safety behavior, including multi-turn conversations and user diversity. You will also be responsible for productionizing successful research into evaluation pipelines that run during model training and launch, directly influencing how Anthropic understands and enhances the safety of its models.

What we offer

Anthropic provides a collaborative work environment with a focus on building beneficial AI systems. You will have access to competitive compensation and benefits, including optional equity donation matching, generous vacation and parental leave, and flexible working hours. Our office in San Francisco is designed to foster collaboration among colleagues, and we are committed to creating a supportive workplace culture that values your contributions.

Similar Jobs You Might Like

Anthropic

Machine Learning Engineer

Anthropic · 📍 San Francisco

Anthropic is hiring ML/Research Engineers to develop systems that detect and mitigate misuse of AI technologies. You'll work with Python and machine learning frameworks like TensorFlow and PyTorch. This role requires experience in building classifiers and monitoring systems for AI safety.

Mid-Level · 14h ago
Anthropic

AI Research Engineer

Anthropic · 📍 San Francisco · On-Site

Anthropic is hiring a Privacy Research Engineer to design and implement privacy-preserving techniques for AI systems. You'll work with Python and ML frameworks like PyTorch and JAX in San Francisco. This position requires experience in privacy-preserving machine learning.

🏛️ On-Site · Mid-Level · 14h ago
Apple

ML Research Engineer

Apple · 📍 San Francisco

Apple is hiring an ML Research Engineer to lead the design and development of automated safety benchmarking methodologies for AI features. You'll work with Python and machine learning techniques to ensure safe and trustworthy AI experiences. This role requires strong analytical skills and experience in AI safety.

Mid-Level · 1 month ago
OpenAI

Research Scientist

OpenAI · 📍 San Francisco

OpenAI is hiring a Senior Research Scientist for their Safety Oversight team to advance AI safety research. You'll work on developing models to detect and mitigate AI misuse and misalignment. This position requires a strong background in machine learning and safety research.

Senior · 1 year ago
Anthropic

Red Team Engineer

Anthropic · 📍 San Francisco · Hybrid

Anthropic is hiring a Staff Red Team Engineer to ensure the safety of AI systems by uncovering vulnerabilities through adversarial testing. You'll work with AWS and Python to simulate sophisticated threat actors. This role requires experience in security engineering and a strong understanding of AI systems.

🏢 Hybrid · Staff · 14h ago