
About Anthropic
Building safe and reliable AI systems for everyone
Key Highlights
- Headquartered in SoMa, San Francisco, CA
- Raised $29.3 billion in funding, including a $13 billion Series F round
- Over 1,000 employees focused on AI safety and research
- Launched Claude, an AI chat assistant rivaling ChatGPT
Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backed by Google, Anthropic has raised $29.3 billion in funding, including a $13 billion Series F round.
🎁 Benefits
Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits via Ca...
🌟 Culture
Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs compared to existing AI systems. The compan...
Overview
Anthropic is hiring a Lead AI Research Engineer to design and implement its evaluation platform for AI models. You'll work at the intersection of research and engineering to enhance model capabilities and safety. This role requires strong technical leadership and collaboration skills.
Job Description
Who you are
You have a strong background in AI and machine learning, with experience in designing evaluation methodologies that assess model capabilities across various domains. Your technical expertise allows you to lead the architecture of evaluation platforms that scale with evolving research needs. You are comfortable collaborating with cross-functional teams, including training, alignment, and safety researchers, to ensure models meet high standards before deployment.
You possess excellent problem-solving skills and a strategic mindset, enabling you to drive the vision and implementation of evaluation systems. Your ability to communicate complex ideas clearly helps you work effectively with diverse teams, fostering a collaborative environment focused on building safe and beneficial AI systems.
What you'll do
In this role, you will lead the design and implementation of Anthropic's evaluation platform, shaping how the company understands and improves its AI models. You will develop novel evaluation methodologies that assess capabilities such as reasoning, safety, helpfulness, and harmlessness. Your work will directly influence training decisions and the model development roadmap, making it essential to the company's mission.
You will collaborate closely with various teams to ensure that the evaluation infrastructure is robust and scalable, adapting to the rapidly evolving landscape of AI capabilities. Your leadership will guide the strategic vision for the evaluation systems, ensuring they align with Anthropic's goals of creating reliable and interpretable AI.
What we offer
Anthropic provides competitive compensation and benefits, including optional equity donation matching, generous vacation and parental leave, and flexible working hours. You will have the opportunity to work in a collaborative office space in San Francisco, contributing to a mission-driven organization focused on building beneficial AI systems. We encourage you to apply even if your experience doesn't match every requirement, as we value diverse perspectives and backgrounds.
Similar Jobs You Might Like

AI Research Engineer
Cohere is hiring a Senior Research Engineer, Model Evaluation to develop next-generation evaluation methods for AI models. You'll work with Python, TensorFlow, and PyTorch to enhance model capabilities. This role requires expertise in machine learning and data analysis.

Research Scientist
Cohere is hiring a Senior Research Scientist, Model Evaluation to create next-generation evaluation methods for AI models. You'll work on ambitious benchmarks and collaborate with cross-functional teams. This role requires expertise in model evaluation and AI systems.

AI Research Engineer
Cohere is hiring a Staff Research Engineer to enhance model efficiency for AI systems. You'll work on optimizing large language models and improving inference efficiency. This position requires expertise in machine learning and performance optimization.

AI Research Engineer
Mercor is seeking a Mid-Level AI Research Engineer to contribute to AI development through post-training experiments and synthetic data generation. You'll work with Python and machine learning frameworks like TensorFlow and PyTorch in San Francisco.

AI Research Engineer
OpenAI is hiring an AI Research Engineer to develop ambitious environments for measuring and steering AI models. You'll work with statistical analysis and reinforcement learning techniques in San Francisco.