
About Anthropic
Building safe and reliable AI systems for everyone
Key Highlights
- Headquartered in SoMa, San Francisco, CA
- Raised $29.3 billion in funding, including $13 billion Series F
- Over 1,000 employees focused on AI safety and research
- Launched Claude, an AI chat assistant rivaling ChatGPT
Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backing from Google, Anthropic has raised $29.3 billion in funding, including a $13 billion Series F round.
🎁 Benefits
Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits.
🌟 Culture
Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs than existing AI systems.
Overview
Anthropic is seeking an AI Safety Fellow to accelerate AI safety research and foster talent. You'll work on empirical projects aligned with AI safety priorities. This role is open to promising technical talent regardless of previous experience.
Job Description
Who you are
You are a promising technical talent eager to contribute to the field of AI safety. Specifically, you:
- May not have previous experience in AI safety research, but are motivated to learn and grow in this area
- Are open to mentorship and collaboration with experienced researchers and engineers
- Are excited to work on empirical projects aligned with Anthropic's mission to create reliable and beneficial AI systems
- Are adaptable and willing to engage with the broader AI safety research community
- Are committed to producing meaningful outputs, such as research papers, during your fellowship
What you'll do
As an AI Safety Fellow at Anthropic, you will join a four-month program designed to accelerate your research in AI safety. During the fellowship, you will:
- Work on an empirical project aligned with Anthropic's research priorities, primarily using external infrastructure such as open-source models and public APIs
- Receive direct mentorship from Anthropic researchers, who will guide you through project selection and mentor matching
- Have access to a shared workspace in either Berkeley, California, or London, UK, where you can collaborate with fellow researchers
- Connect with the broader AI safety research community, building your understanding and network in the field
- Produce a public output, such as a paper submission, showcasing your research findings
The fellowship also provides a weekly stipend and funding for research expenses, allowing you to focus on your project without financial constraints.
What we offer
Anthropic offers a supportive environment for AI safety research, providing funding and mentorship to promising technical talent. The fellowship includes:
- A weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD, along with access to benefits that vary by country
- Funding for compute resources and other research expenses, ensuring you have the tools needed to succeed
- A shared workspace and opportunities to connect with experienced researchers and the broader AI safety community
- Flexible working hours, allowing you to balance your research with personal commitments
You will be part of a mission-driven organization focused on creating safe and beneficial AI systems for society.