
About Anthropic
Building safe and reliable AI systems for everyone
Key Highlights
- Headquartered in SoMa, San Francisco, CA
- Raised $29.3 billion in funding, including $13 billion Series F
- Over 1,000 employees focused on AI safety and research
- Launched Claude, an AI chat assistant rivaling ChatGPT
Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backed by Google, Anthropic has raised $29.3 billion in funding, including a $13 billion Series F round.
🎁 Benefits
Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits via Ca...
🌟 Culture
Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs compared to existing AI systems. The compan...
Overview
Anthropic is seeking a Technical CBRN-E Threat Investigator to join their Threat Intelligence team. You'll be responsible for detecting and investigating the misuse of AI systems related to CBRN-E threats. This role requires deep expertise in chemical defense or biodefense.
Job Description
Who you are
You have a strong background in threat investigation, particularly in the context of Chemical, Biological, Radiological, Nuclear, and Explosives (CBRN-E) threats. Your expertise in either chemical defense or biodefense is critical, as you will be working at the intersection of AI safety and CBRN security. You are comfortable with potential exposure to explicit content and prepared to respond to escalations during weekends and holidays. You possess excellent analytical skills and a keen eye for detail, enabling you to conduct thorough investigations into potential misuse of AI systems.
You are a collaborative team player who thrives in a fast-paced environment. Your ability to communicate complex ideas clearly and effectively is essential, as you will work closely with researchers, engineers, and policy experts. You are committed to ensuring that AI systems are safe and beneficial for society, and you understand the implications of misuse in the context of advanced technologies.
Desirable
Experience in developing detection techniques and building defenses against threat actors is a plus. Familiarity with AI technologies and their potential vulnerabilities will enhance your effectiveness in this role. You are proactive in identifying potential threats and are skilled at developing strategies to mitigate risks associated with AI misuse.
What you'll do
In this role, you will be responsible for detecting and investigating attempts to misuse Anthropic's AI systems for developing weapons, synthesizing dangerous compounds, or creating biological harm. You will conduct thorough investigations into potential misuse cases, leveraging your specialized domain expertise to protect against serious threats. Your work will involve developing novel detection techniques and building robust defenses against threat actors.
You will collaborate with a diverse team of researchers and engineers to enhance the safety and security of AI systems. Your insights will contribute to the development of policies and practices that ensure the responsible use of AI technologies. You will also engage in ongoing research to stay ahead of emerging threats and vulnerabilities in the AI landscape.
What we offer
Anthropic is a public benefit corporation headquartered in San Francisco, offering competitive compensation and benefits. You will have access to optional equity donation matching, generous vacation and parental leave, and flexible working hours. Our office provides a collaborative environment where you can work closely with colleagues who share your commitment to building beneficial AI systems. We encourage you to apply even if your experience doesn't match every requirement, as we value diverse perspectives and backgrounds.
Similar Jobs You Might Like

Technical Cyber Threat Investigator
Anthropic is seeking a Technical Cyber Threat Investigator to join their Threat Intelligence team. You'll be responsible for detecting and investigating the misuse of AI systems for malicious cyber operations. This role requires expertise in cybersecurity and AI safety.

Technical Scaled Abuse Threat Investigator
Anthropic is seeking a Technical Scaled Abuse Threat Investigator to join their Threat Intelligence team. You'll be responsible for detecting and investigating large-scale misuse of AI systems. This role requires strong analytical skills and experience in threat detection.

Biological Safety Research Scientist
Anthropic is seeking a Biological Safety Research Scientist to design and develop safety systems for AI. You'll collaborate with experts to ensure responsible AI safety in the biological domain. This role requires a strong background in biological sciences.

Threat Collections Engineer
Anthropic is hiring a Threat Collections Engineer to build infrastructure for threat discovery. You'll work with technologies like YARA, Python, and AWS to develop automated detection systems. This role requires experience in threat intelligence and data integration.

Applied Safety Research Engineer
Anthropic is seeking an Applied Safety Research Engineer to develop methods for evaluating AI safety. You'll work with machine learning and Python to design experiments that improve model evaluations. This role requires a research-oriented mindset and experience in applied ML.