Anthropic

About Anthropic

Building safe and reliable AI systems for everyone

🏢 Tech · 👥 1001+ employees · 📅 Founded 2021 · 📍 SoMa, San Francisco, CA · 💰 $29.3b raised
B2B · Artificial Intelligence · Deep Tech · Machine Learning · SaaS

Key Highlights

  • Headquartered in SoMa, San Francisco, CA
  • Raised $29.3 billion in funding, including $13 billion Series F
  • Over 1,000 employees focused on AI safety and research
  • Launched Claude, an AI chat assistant rivaling ChatGPT

Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backing from Google, Anthropic has raised $29.3 billion in funding, including a $13 billion Series F round.

🎁 Benefits

Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits.

🌟 Culture

Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs than existing AI systems.

Research Scientist (Junior / Senior)

Anthropic · San Francisco (On-Site)

Posted 7h ago · 🏛️ On-Site · Junior / Senior · Research Scientist · 📍 San Francisco · 💰 $350,000 - $850,000 / year

Overview

Anthropic is hiring a Research Scientist focused on Societal Impacts to analyze AI behavior and improve model safety. You'll work with observational tools and collaborate across teams. This role requires experience in machine learning systems.

Job Description

Who you are

You have a strong background in machine learning systems and a genuine interest in societal impacts research. You thrive in cross-functional environments and adapt readily to evolving team priorities. Whether you're at a junior or senior level, you are excited to contribute to the mission of creating reliable and beneficial AI systems. You possess the analytical skills to use observational tools effectively, and you are comfortable collaborating with a range of teams, including fine-tuning, safeguards, policy, and interpretability.

You are eager to engage with real-world usage patterns and understand how people interact with AI systems like Claude. Your experience enables you to build evaluations that assess AI behavior across critical dimensions such as safety and quality of advice. You are not just a researcher; you are a proactive contributor who enjoys hands-on technical work while also helping to set research direction.

Desirable

Experience with tools like Clio is a plus, as is a background in evaluating AI systems for safety and alignment with ethical guidelines. You are familiar with the challenges of ensuring AI systems are interpretable and steerable, and you are committed to making a positive societal impact through your work.

What you'll do

In this role, you will analyze real-world usage patterns of Claude using observational tools like Clio. Your insights will help surface how users interact with the AI, informing improvements at the model level. You will build and run evaluations to assess Claude's behavior, focusing on key dimensions of its Constitution, including safety and quality of advice in high-stakes situations. Collaboration is key; you will partner closely with teams focused on fine-tuning and safeguards to ensure that the AI operates safely and effectively.

You will also contribute to the development of methodologies for evaluating AI behavior, ensuring that the systems we build are not only effective but also aligned with our mission of creating beneficial AI. Your role will involve regular communication with policy experts and engineers, fostering a collaborative environment where insights can be shared and acted upon.

What we offer

At Anthropic, we provide competitive compensation and benefits, including optional equity donation matching and generous vacation and parental leave. You will enjoy flexible working hours and a collaborative office space in San Francisco, where you can work closely with a diverse team of researchers, engineers, and policy experts. We are committed to creating a supportive environment that encourages innovation and growth, and we believe in the importance of making AI systems that are safe and beneficial for society.

