Anthropic

About Anthropic

Building safe and reliable AI systems for everyone

🏢 Tech • 👥 1,001+ employees • 📅 Founded 2021 • 📍 SoMa, San Francisco, CA • 💰 $29.3B raised • ⭐ 4.5
B2B • Artificial Intelligence • Deep Tech • Machine Learning • SaaS

Key Highlights

  • Headquartered in SoMa, San Francisco, CA
  • Raised $29.3 billion in funding, including a $13 billion Series F round
  • Over 1,000 employees focused on AI safety and research
  • Launched Claude, an AI chat assistant rivaling ChatGPT

Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backed by Google, Anthropic has raised $29.3 billion in funding, including a $13 billion Series F round.

🎁 Benefits

Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits via Ca...

🌟 Culture

Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs compared to existing AI systems. The compan...

Red Team Engineer β€’ Staff

Anthropic β€’ San Francisco - Hybrid

Overview

Anthropic is hiring a Staff Red Team Engineer to ensure the safety of AI systems by uncovering vulnerabilities through adversarial testing. You'll work with AWS and Python to simulate sophisticated threat actors. This role requires experience in security engineering and a strong understanding of AI systems.

Job Description

Who you are

You have a strong background in security engineering, with experience in adversarial testing and vulnerability assessment β€” you've worked on uncovering vulnerabilities in complex systems and understand the unique challenges posed by advanced AI capabilities. Your expertise in Python allows you to develop creative attack scenarios and implement novel testing approaches effectively.

You are familiar with the full spectrum of potential abuse in AI systems, from account manipulation to payment fraud β€” you have a keen eye for detail and can simulate sophisticated threat actors who chain multiple attack vectors to achieve their objectives. Your understanding of traditional security practices complements your innovative approach to AI safety.

You thrive in collaborative environments and enjoy working with cross-functional teams β€” your communication skills enable you to articulate complex security concepts to both technical and non-technical stakeholders. You are passionate about the implications of AI safety and are committed to ensuring that AI systems are beneficial for society.

Desirable

Experience with cloud platforms like AWS is a plus, as is familiarity with machine learning frameworks β€” you understand how to leverage these technologies to enhance security measures. A background in research or a related field can also be beneficial, as it allows you to stay ahead of emerging threats and vulnerabilities.

What you'll do

In this role, you will conduct comprehensive adversarial testing across Anthropic’s product surfaces β€” you'll develop creative attack scenarios that combine multiple exploitation techniques to uncover vulnerabilities before they can be exploited by malicious actors. Your work will involve researching and implementing novel testing approaches for emerging capabilities, including agent systems and new interaction paradigms.

You will design and execute 'full kill chain' attacks that emulate sophisticated threat actors β€” your ability to think like an adversary will be crucial in identifying potential risks and vulnerabilities in our AI systems. You will collaborate closely with engineers and researchers to ensure that our products are safe and reliable.
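Purely as a hypothetical illustration of the "full kill chain" idea described above (none of these stage names, targets, or tools come from the posting), such an exercise can be modeled as a sequence of dependent attack stages that halts at the first stage a defensive control blocks:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class AttackStep:
    """One stage of a simulated kill chain (all names hypothetical)."""
    name: str
    # Reads/mutates a shared context dict; returns True if the stage succeeds.
    action: Callable[[Dict], bool]

def run_kill_chain(steps: List[AttackStep], context: Optional[Dict] = None) -> Dict:
    """Run stages in order, stop at the first blocked stage, return a report."""
    context = context if context is not None else {}
    report = {"succeeded": [], "blocked_at": None}
    for step in steps:
        if step.action(context):
            report["succeeded"].append(step.name)
        else:
            report["blocked_at"] = step.name
            break
    return report

# Simulated stages: a real exercise would probe actual product surfaces.
chain = [
    AttackStep("recon", lambda ctx: ctx.setdefault("target", "demo-api") is not None),
    AttackStep("account_takeover", lambda ctx: True),   # pretend this succeeds
    AttackStep("payment_fraud", lambda ctx: False),     # pretend a control blocks it
]
result = run_kill_chain(chain)
```

Here the lambdas just simulate outcomes so the chaining logic is visible; in a real engagement each action would exercise an actual attack vector, and the report would feed the documentation and recommendations the role calls for.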

Your responsibilities will also include documenting your findings and providing actionable recommendations to improve our security posture β€” you will play a key role in shaping the security strategies of our AI systems. You will stay updated on the latest trends in AI safety and security, ensuring that Anthropic remains at the forefront of this critical field.

What we offer

Anthropic is a public benefit corporation with a mission to create reliable and interpretable AI systems β€” we offer competitive compensation and benefits, including optional equity donation matching and generous vacation and parental leave. Our flexible working hours allow you to balance your professional and personal life effectively.

You will have the opportunity to work in a lovely office space in San Francisco, collaborating with a diverse team of researchers, engineers, and policy experts β€” we believe that a collaborative environment fosters innovation and creativity. Join us in our mission to build beneficial AI systems that are safe and reliable for users and society as a whole.

Similar Jobs You Might Like

Applied Scientist

Anthropicβ€’πŸ“ San Francisco

Anthropic is seeking an Applied Safety Research Engineer to develop methods for evaluating AI safety. You'll work with machine learning and Python to design experiments that improve model evaluations. This role requires a research-oriented mindset and experience in applied ML.

Mid-Level
9h ago
Security Engineer

Workatoβ€’πŸ“ Sofia

Workato is hiring a Security Engineer for their Red Team to enhance product security through advanced threat intelligence and adversary emulation. You'll work with frameworks like MITRE ATT&CK and contribute to security research. This role requires experience in security certifications and compliance frameworks.

Mid-Level
1d ago
Security Engineer

Workatoβ€’πŸ“ Lisbon

Workato is seeking a Security Engineer - Red Team to join their Product Security team. You'll be responsible for security testing, social engineering, and threat intelligence. This role requires advanced security certifications and experience with adversary emulation frameworks.

1d ago
Software Engineer

Anthropicβ€’πŸ“ San Francisco

Anthropic is seeking Software Engineers for their Safeguards team to develop safety mechanisms for AI systems. You'll work with Java and Python to build monitoring systems and abuse detection infrastructure. This role requires 5-10 years of experience in software engineering.

Mid-Level
9h ago
AI Research Engineer

Anthropicβ€’πŸ“ San Francisco - Hybrid

Anthropic is seeking an AI Research Engineer to develop next-generation training environments for agentic AI systems. You'll work on reinforcement learning and collaborate across teams to push the boundaries of AI capabilities. This role requires a blend of research and engineering skills.

🏢 Hybrid • Mid-Level
9h ago