
About Anthropic
Building safe and reliable AI systems for everyone
Key Highlights
- Headquartered in SoMa, San Francisco, CA
- Raised $29.3 billion in funding, including $13 billion Series F
- Over 1,000 employees focused on AI safety and research
- Launched Claude, an AI chat assistant rivaling ChatGPT
Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backing from Google, Anthropic has raised $29.3 billion in funding, including a $13 billion Series F round.
🎁 Benefits
Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits.
🌟 Culture
Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs than existing AI systems.
Overview
Anthropic is seeking an AI Security Fellow to accelerate AI security and safety research. You'll work on empirical projects aligned with Anthropic's research priorities, using external infrastructure. This role is open to promising technical talent, regardless of previous experience.
Job Description
Who you are
You are a motivated individual interested in AI security and safety research. You may have a background in computer science, engineering, or a related field, but what matters most is your enthusiasm for exploring the intersection of AI and cybersecurity. You are eager to learn and grow in a collaborative environment, and you are open to mentorship and guidance as you embark on this journey. You understand the importance of creating reliable and interpretable AI systems that can benefit society.
Desirable
While previous experience in AI or cybersecurity is not required, familiarity with programming languages and a basic understanding of machine learning concepts will be beneficial. You are curious about how AI can be applied to real-world problems and are excited about the potential of AI to enhance cybersecurity measures. You are a team player who values collaboration and is ready to contribute to a mission-driven organization.
What you'll do
As an AI Security Fellow at Anthropic, you will pursue research projects focused on the defensive use of AI in cybersecurity. You will primarily use external infrastructure, such as open-source models and public APIs, to conduct empirical research aligned with Anthropic's research priorities. Your work will aim to produce a public output, such as a paper submission, showcasing your findings and contributions to the field. You will collaborate with a diverse team of researchers and engineers, drawing on their insights and feedback to refine your projects, and you will receive direct mentorship from experienced professionals to help you navigate the complexities of AI security research and develop your skills.
What we offer
Anthropic provides a supportive environment for research and innovation. You will have access to funding and mentorship, allowing you to focus on your research regardless of previous experience. The fellowship runs for four months, with multiple cohorts each year, providing opportunities for continuous learning and growth. You will be part of a mission-driven organization that values the safe and beneficial use of AI, contributing to projects with meaningful societal impact. You will also enjoy competitive compensation and benefits, flexible working hours, and the chance to collaborate with a talented team in the office or remotely.