Anthropic

About Anthropic

Building safe and reliable AI systems for everyone

🏢 Tech👥 1001+ employees📅 Founded 2021📍 SoMa, San Francisco, CA💰 $29.3b4.5
B2BArtificial IntelligenceDeep TechMachine LearningSaaS

Key Highlights

  • Headquartered in SoMa, San Francisco, CA
  • Raised $29.3 billion in funding, including $13 billion Series F
  • Over 1,000 employees focused on AI safety and research
  • Launched Claude, an AI chat assistant rivaling ChatGPT

Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backed by Google, Anthropic has raised $29.3 billion in funding, including a monumental Series F round of $13 b...

🎁 Benefits

Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits via Ca...

🌟 Culture

Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs compared to existing AI systems. The compan...

Anthropic

Machine Learning Engineer Mid-Level

AnthropicSan Francisco

Apply Now →

Overview

Anthropic is hiring ML/Research Engineers to develop systems that detect and mitigate misuse of AI technologies. You'll work with Python and machine learning frameworks like TensorFlow and PyTorch. This role requires experience in building classifiers and monitoring systems for AI safety.

Job Description

Who you are

You have a strong background in machine learning and AI, with experience in developing classifiers to detect misuse and anomalous behavior at scale. Your expertise includes building synthetic data pipelines and methods for evaluating AI systems effectively. You are familiar with threat modeling and have a keen understanding of the safety implications of AI technologies.

You possess excellent problem-solving skills and can work collaboratively with cross-functional teams to ensure the safety and reliability of AI systems. Your experience includes developing defenses against potential misuse and understanding the ethical implications of AI deployment. You are committed to creating beneficial AI systems that prioritize user wellbeing and societal safety.

What you'll do

As a member of the Safeguards ML team, you will develop classifiers to detect misuse and anomalous behavior at scale. This involves creating synthetic data pipelines for training classifiers and methods to automatically source representative evaluations. You will build systems to monitor for harms that span multiple exchanges, such as coordinated cyber attacks and influence operations, and develop new methods for aggregating and analyzing signals across contexts.

You will evaluate and improve the safety of agentic products by developing threat models and environments to test for agentic risks. Additionally, you will conduct research on automated red-teaming to enhance the robustness of AI systems against misuse. Your work will directly contribute to Anthropic's Responsible Scaling Policy commitments, ensuring that AI technologies are safe and beneficial for users and society.

What we offer

Anthropic is a public benefit corporation headquartered in San Francisco, offering competitive compensation and benefits, including optional equity donation matching, generous vacation and parental leave, and flexible working hours. You will have the opportunity to collaborate with a diverse team of researchers, engineers, and policy experts dedicated to building beneficial AI systems. Join us in our mission to create reliable, interpretable, and steerable AI systems that prioritize safety and societal benefit.

Interested in this role?

Apply now or save it for later. Get alerts for similar jobs at Anthropic.

Similar Jobs You Might Like

Based on your interests and this role

Anthropic

Ai Research Engineer

Anthropic📍 San Francisco - On-Site

Anthropic is hiring a Privacy Research Engineer to design and implement privacy-preserving techniques for AI systems. You'll work with Python and ML frameworks like PyTorch and JAX in San Francisco. This position requires experience in privacy-preserving machine learning.

🏛️ On-SiteMid-Level
6h ago
Anthropic

Applied Scientist

Anthropic📍 San Francisco

Anthropic is seeking an Applied Safety Research Engineer to develop methods for evaluating AI safety. You'll work with machine learning and Python to design experiments that improve model evaluations. This role requires a research-oriented mindset and experience in applied ML.

Mid-Level
6h ago
Anthropic

Software Engineering

Anthropic📍 San Francisco

Anthropic is seeking Software Engineers for their Safeguards team to develop safety mechanisms for AI systems. You'll work with Java and Python to build monitoring systems and abuse detection infrastructure. This role requires 5-10 years of experience in software engineering.

Mid-Level
6h ago
Figma

Machine Learning Engineer

Figma📍 San Francisco - Remote

Figma is hiring a Machine Learning Engineer to design and build ML models for search and generative AI features. You'll work with Python and data pipelines to enhance user productivity. This role requires experience in applied machine learning.

🏠 RemoteMid-Level
3 months ago
Anthropic

Machine Learning Engineer

Anthropic📍 San Francisco - On-Site

Anthropic is seeking a Machine Learning Infrastructure Engineer to build and scale critical infrastructure for AI safety systems. You'll work with distributed systems and machine learning technologies to ensure reliable operations. This role requires experience in building scalable ML infrastructure.

🏛️ On-SiteMid-Level
6h ago