
About Anthropic
Building safe and reliable AI systems for everyone
Key Highlights
- Headquartered in SoMa, San Francisco, CA
- Raised $29.3 billion in funding, including $13 billion Series F
- Over 1,000 employees focused on AI safety and research
- Launched Claude, an AI chat assistant rivaling ChatGPT
Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backed by Google, Anthropic has raised $29.3 billion in funding, including a $13 billion Series F round.
🎁 Benefits
Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits.
🌟 Culture
Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs compared to existing AI systems.
Overview
Anthropic is seeking a Product Policy Manager to assess product launches for safety risks and collaborate with cross-functional teams. This role requires strong policy development skills and a commitment to safe AI practices.
Job Description
Who you are
You have a strong background in policy management, ideally with experience in technology or AI. You understand the importance of balancing innovation with responsibility, and you are skilled at developing frameworks to assess risks associated with new product features. You are comfortable working with cross-functional teams, including product, legal, and engineering, to ensure that safety considerations are integrated into the product development process. You are detail-oriented and capable of conducting comprehensive safety reviews that cover both technical and non-technical harms. You are also prepared to engage with sensitive content and navigate complex discussions around product safety.
What you'll do
As the Product Policy Manager, you will lead the Safeguards team in assessing product launches for safety risks. You will develop and maintain risk assessment frameworks to identify and evaluate potential safety risks associated with new features, conduct thorough product safety reviews, and collaborate with stakeholders to drive appropriate mitigations. You will work closely with product teams to understand upcoming features and anticipate potential misuse or unintended consequences. Your insights will shape policies that support Anthropic's mission of creating safe and beneficial AI systems, and you will contribute to ongoing discussions about product safety and the development of best practices in the field.
What we offer
Anthropic offers a competitive compensation package, including benefits and optional equity donation matching. You will enjoy generous vacation and parental leave, flexible working hours, and a collaborative office environment in San Francisco. We are committed to fostering a culture that values safety and responsibility in AI development, and we encourage you to apply even if your experience doesn't match every requirement. Join us in our mission to create reliable and interpretable AI systems that benefit society.
Similar Jobs You Might Like

Product Policy Manager
OpenAI is hiring a Product Policy Manager focused on Child Safety to develop and implement policies that govern the use of AI technologies. This role requires expertise in child safety and AI policy, working closely with product and legal teams.

Lead Product Policy Manager
Pinterest is seeking a Lead Product Policy Manager to develop industry-leading content policies, particularly around generative AI. You'll collaborate with cross-functional teams to ensure a safe environment for users. This role requires significant experience in policy management.

Program Manager
Meta is seeking a Product Risk Program Manager to oversee complex risk reviews and ensure compliant product launches. You'll collaborate with product and engineering teams while optimizing processes. This role requires program management expertise and systems thinking.

Program Manager
Meta is hiring a Product Risk Program Manager for their Product Enablement team to manage complex risk reviews and ensure compliant product launches. You'll collaborate with product and engineering teams while optimizing processes. This role requires program management expertise.

Product Manager
Binance is hiring a Product Manager for their Risk team to own the strategy and execution of risk-related products. You'll collaborate with various teams to design data-driven solutions that enhance user safety. This role requires experience in product management and a strong understanding of risk mitigation.