Anthropic

About Anthropic

Building safe and reliable AI systems for everyone

🏒 Tech Β· πŸ‘₯ 1,001+ employees Β· πŸ“… Founded 2021 Β· πŸ“ SoMa, San Francisco, CA Β· πŸ’° $29.3B Β· ⭐ 4.5
B2B Β· Artificial Intelligence Β· Deep Tech Β· Machine Learning Β· SaaS

Key Highlights

  • Headquartered in SoMa, San Francisco, CA
  • Raised $29.3 billion in funding, including $13 billion Series F
  • Over 1,000 employees focused on AI safety and research
  • Launched Claude, an AI chat assistant rivaling ChatGPT

Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backed by Google, Anthropic has raised $29.3 billion in funding, including a $13 billion Series F round.

🎁 Benefits

Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits.

🌟 Culture

Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs than existing AI systems.

Research Scientist β€’ Mid-Level

Anthropic β€’ San Francisco

Posted 8h ago Β· Mid-Level Β· Research Scientist Β· πŸ“ San Francisco Β· πŸ“ New York Β· πŸ’° $300,000 - $320,000 / year

Overview

Anthropic is seeking a Biological Safety Research Scientist to design and develop safety systems for AI. You'll collaborate with experts to ensure AI is developed and deployed responsibly in the biological domain. This role requires a strong background in the biological sciences.

Job Description

Who you are

You are a biological scientist with a strong understanding of safety mechanisms in AI systems. You have experience in designing and executing capability evaluations to assess the performance of AI models. Your background allows you to translate complex biosecurity concepts into practical technical safeguards. You thrive in collaborative environments, working closely with threat modeling experts and machine learning engineers to develop robust safety systems. You are committed to ensuring that AI technologies are safe and beneficial for society. You understand the balance between advancing legitimate life sciences research and preventing misuse by sophisticated threat actors.

Desirable

Experience in AI safety or biosecurity is a plus. Familiarity with machine learning concepts and practices will enhance your ability to contribute effectively to the team. You are open to learning and adapting to new challenges in the rapidly evolving field of AI safety.

What you'll do

In this role, you will design and execute capability evaluations to assess new AI models. You will collaborate closely with internal and external experts to develop training data for safety systems, ensuring they are robust against adversarial attacks while maintaining low false-positive rates for legitimate researchers. You will analyze the performance of safety systems and contribute to oversight mechanisms that align with Anthropic's mission of creating reliable and interpretable AI systems. Your work will directly shape how frontier AI models handle dual-use biological knowledge.

What we offer

At Anthropic, we provide competitive compensation and benefits, including optional equity donation matching, generous vacation and parental leave, and flexible working hours. You will work in a collaborative office space in San Francisco, surrounded by a team of dedicated researchers and engineers. We encourage you to apply even if your experience doesn't match every requirement, as we value diverse perspectives and backgrounds in our mission to build beneficial AI systems.


Similar Jobs You Might Like


Anthropic

Applied Scientist

Anthropicβ€’πŸ“ San Francisco

Anthropic is seeking an Applied Safety Research Engineer to develop methods for evaluating AI safety. You'll work with machine learning and Python to design experiments that improve model evaluations. This role requires a research-oriented mindset and experience in applied ML.

Mid-Level
7h ago
OpenAI

Research Scientist

OpenAIβ€’πŸ“ San Francisco

OpenAI is hiring a Research Lead for Chemical & Biological Risk to design and implement mitigation strategies for AI safety. You'll oversee safeguards against chemical and biological misuse across OpenAI’s products. This role requires technical depth and decisive leadership.

Lead
5 months ago
OpenAI

Research Scientist

OpenAIβ€’πŸ“ San Francisco

OpenAI is hiring a Senior Research Scientist for their Safety Oversight team to advance AI safety research. You'll work on developing models to detect and mitigate AI misuse and misalignment. This position requires a strong background in machine learning and safety research.

Senior
1 year ago
OpenAI

Technical Lead

OpenAIβ€’πŸ“ San Francisco

OpenAI is hiring a Technical Lead for their Safety Research team to develop strategies addressing potential harms from AI misalignment. You'll work on advancing safety capabilities in AI models and systems. This role requires strong leadership and research skills.

Lead
4 months ago
Anthropic

Research Scientist

Anthropicβ€’πŸ“ San Francisco - On-Site

Anthropic is seeking a Research Scientist to join their Life Science team, focusing on developing AI systems for biological research. You'll leverage your expertise in biology and machine learning to create evaluation frameworks and improve model performance.

πŸ›οΈ On-Site
7h ago