
About Anthropic
Building safe and reliable AI systems for everyone
Key Highlights
- Headquartered in SoMa, San Francisco, CA
- Raised $29.3 billion in funding, including a $13 billion Series F round
- Over 1,000 employees focused on AI safety and research
- Launched Claude, an AI chat assistant rivaling ChatGPT
Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backed by Google, Anthropic has raised $29.3 billion in funding, including a monumental Series F round of $13 billion.
🎁 Benefits
Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits via Ca...
🌟 Culture
Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs compared to existing AI systems. The compan...
Overview
Anthropic is hiring a National Security Policy Lead to guide policy approaches to national security challenges involving AI. You'll collaborate with various teams to develop strategies that ensure the security of U.S. and allied democracies. This role requires expertise in national security and policy development.
Job Description
Who you are
You have a strong background in national security policy, with experience in developing and leading engagements on policy approaches related to AI. You understand the complexities of national security challenges and are committed to ensuring that AI systems are safe and beneficial for society. Your ability to collaborate with diverse teams, including legal, trust and safety, product, and research, is essential for success in this role. You are a high-agency individual who thrives in a dynamic environment and is passionate about the intersection of technology and national security.
What you'll do
In this role, you will design policy proposals to address national security challenges related to AI and lead associated policy engagements. You will shape Anthropic's own policies and approaches to mitigating national security risks involving its products. Collaborating with national security partners across public and private sectors, you will develop strategies for AI that safeguard the geopolitical strength and competitiveness of the United States and allied democracies. Your work will ensure that Anthropic supports the security of democracies and promotes the responsible adoption of AI for defense and intelligence purposes.
What we offer
Anthropic offers competitive compensation and benefits, including optional equity donation matching, generous vacation and parental leave, and flexible working hours. You will have the opportunity to work in a lovely office space in San Francisco, collaborating with a committed team of researchers, engineers, policy experts, and business leaders dedicated to building beneficial AI systems.
Similar Jobs You Might Like

Policy Manager
Anthropic is hiring a Manager for National Security Policy to oversee the daily operations of the National Security Policy team. This role requires a deep understanding of the national security and policy landscape.

Strategic Project Lead
Mercor is hiring a Strategic Project Lead to drive the execution of multi-million-dollar projects and build relationships with top AI researchers. This role requires strong operational excellence and problem-solving skills.

Geopolitics Analyst
Anthropic is hiring a Geopolitics Analyst to analyze international AI developments and inform policy positions. You'll work at the intersection of technology, business, and policy, engaging with national security stakeholders. This role requires strong analytical skills and a deep understanding of the global AI landscape.

Policy Manager
Anthropic is seeking a Policy Manager specializing in Chemical Weapons and High Yield Explosives to design evaluation methodologies and develop strategies for AI safety. This role requires a Ph.D. in Chemistry or related fields and 5-8 years of relevant experience.