Anthropic

About Anthropic

Building safe and reliable AI systems for everyone

🏢 Tech · 👥 1001+ employees · 📅 Founded 2021 · 📍 SoMa, San Francisco, CA · 💰 $29.3B · 4.5
B2B · Artificial Intelligence · Deep Tech · Machine Learning · SaaS

Key Highlights

  • Headquartered in SoMa, San Francisco, CA
  • Raised $29.3 billion in funding, including $13 billion Series F
  • Over 1,000 employees focused on AI safety and research
  • Launched Claude, an AI chat assistant rivaling ChatGPT

Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backing from Google, Anthropic has raised $29.3 billion in funding, including a $13 billion Series F round.

🎁 Benefits

Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits via Ca...

🌟 Culture

Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs compared to existing AI systems. The compan...


Technical Scaled Abuse Threat Investigator

Anthropic · San Francisco

Posted 8h ago · Mid-Level · Other Technical Roles · 📍 San Francisco · 📍 New York · 💰 $230,000 - $290,000 / year
Apply Now →

Overview

Anthropic is seeking a Technical Scaled Abuse Threat Investigator to join their Threat Intelligence team. You'll be responsible for detecting and investigating large-scale misuse of AI systems. This role requires strong analytical skills and experience in threat detection.

Job Description

Who you are

You have a strong background in threat detection and investigation, ideally with experience in analyzing large-scale misuse of technology. Your analytical skills allow you to combine open-source research with internal data analysis to understand adversarial tactics. You are comfortable working with sensitive content and can handle escalations during weekends and holidays. You thrive in a collaborative environment and are committed to building safe and beneficial AI systems.

Desirable

Experience in cybersecurity or threat intelligence is a plus. Familiarity with AI systems and their potential vulnerabilities will help you excel in this role. You are proactive in developing abuse signals and tracking strategies to identify adversarial activity.

What you'll do

As a Technical Scaled Abuse Threat Investigator, you will be responsible for detecting and investigating large-scale abuse patterns, including model distillation, unauthorized API access, and fraud schemes. You will develop abuse signals and tracking strategies to proactively identify coordinated abuse networks. Your work will directly inform defenses against threat actors who seek to exploit Anthropic's products. You will collaborate with a team of researchers, engineers, and policy experts to enhance the safety and reliability of AI systems. Your insights will contribute to the overall mission of creating beneficial AI for society.

What we offer

Anthropic offers competitive compensation and benefits, including optional equity donation matching and generous vacation and parental leave. You will have flexible working hours and the opportunity to work in a collaborative office space in San Francisco or New York. Join a team that is dedicated to building reliable and interpretable AI systems, and make a meaningful impact in the field of artificial intelligence.


Similar Jobs You Might Like



Technical Cyber Threat Investigator

Anthropic · 📍 San Francisco

Anthropic is seeking a Technical Cyber Threat Investigator to join their Threat Intelligence team. You'll be responsible for detecting and investigating the misuse of AI systems for malicious cyber operations. This role requires expertise in cybersecurity and AI safety.

Mid-Level · 7h ago

Technical CBRN-E Threat Investigator

Anthropic · 📍 San Francisco

Anthropic is seeking a Technical CBRN-E Threat Investigator to join their Threat Intelligence team. You'll be responsible for detecting and investigating the misuse of AI systems related to CBRN-E threats. This role requires deep expertise in chemical defense or biodefense.

Mid-Level · 7h ago

Software Engineer, Scaled Abuse

OpenAI · 📍 San Francisco · On-Site

OpenAI is hiring a Software Engineer for their Scaled Abuse team to design and build next-generation anti-fraud systems. You'll work with technologies like Python and JavaScript to combat fraud effectively. This position requires 5+ years of software engineering experience.

🏛️ On-Site · Mid-Level · 2 years ago

Safeguards Analyst

Anthropic · 📍 San Francisco

Anthropic is hiring a Safeguards Analyst to focus on Account Abuse. You'll develop frameworks for detecting and preventing account abuse on the platform. This role requires a strong understanding of data analysis and operational capabilities.

Mid-Level · 7h ago

Trust & Safety Analyst

Cloudflare · 📍 Lisbon · Hybrid

Cloudflare is hiring a Senior Threat Investigations Analyst to enhance web safety through thorough investigations. You'll work in a hybrid environment based in Lisbon, focusing on making the internet safer for users. This role requires extensive experience in threat investigation.

🏢 Hybrid · Senior · 2w ago