Anthropic

About Anthropic

Building safe and reliable AI systems for everyone

🏢 Tech · 👥 1001+ employees · 📅 Founded 2021 · 📍 SoMa, San Francisco, CA · 💰 $29.3B · ⭐ 4.5
B2B · Artificial Intelligence · Deep Tech · Machine Learning · SaaS

Key Highlights

  • Headquartered in SoMa, San Francisco, CA
  • Raised $29.3 billion in funding, including a $13 billion Series F
  • Over 1,000 employees focused on AI safety and research
  • Launched Claude, an AI chat assistant rivaling ChatGPT

Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backed by Google, Anthropic has raised $29.3 billion in funding, including a monumental Series F round of $13 billion.

🎁 Benefits

Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits via Ca...

🌟 Culture

Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs compared to existing AI systems. The compan...

Anthropic

AI Research Engineer (Mid-Level)

Anthropic · London

Posted 8h ago · Mid-Level · AI Research Engineer · 📍 London · 💰 £260,000 - £370,000 / year

Overview

Anthropic is hiring a Research Engineer/Scientist in Alignment Science to conduct machine learning experiments focused on AI safety. You'll work on AI control and alignment stress-testing in London.

Job Description

Who you are

You are a versatile professional who embodies the roles of both scientist and engineer, with a strong commitment to making AI systems safe, interpretable, and beneficial for society. You have a keen interest in the challenges posed by human-level AI capabilities and are eager to contribute to exploratory research in AI safety. Your background includes building and running machine learning experiments, and you are passionate about understanding and steering the behavior of powerful AI systems. You thrive in collaborative environments, working alongside researchers and engineers on complex problems in alignment science.

You possess a solid understanding of AI control methods, which aim to keep advanced AI systems safe and harmless even in unfamiliar or adversarial scenarios. Your expertise extends to alignment stress-testing, where you build model organisms of misalignment to deepen empirical understanding of how alignment can fail. You are driven by the mission to create reliable AI systems and excited to contribute to a team that prioritizes safety and ethical considerations in AI development.

What you'll do

In this role, you will be responsible for designing and executing thorough machine learning experiments that contribute to the understanding of AI alignment and safety. You will collaborate with interdisciplinary teams, including those focused on interpretability and fine-tuning, to explore innovative approaches to AI control. Your work will involve developing methods to assess and mitigate risks associated with powerful AI systems, ensuring that they operate safely in various contexts. You will also engage in research that informs the broader AI community about alignment challenges and solutions, contributing to Anthropic's mission of building beneficial AI systems.

What we offer

At Anthropic, you will be part of a rapidly growing team dedicated to advancing AI safety and alignment. We offer competitive compensation and benefits, including optional equity donation matching, generous vacation and parental leave, and flexible working hours. Our office in London provides a collaborative environment where you can engage with colleagues who share your commitment to creating safe and interpretable AI systems. Join us in our mission to ensure that AI technology is developed responsibly and ethically, making a positive impact on society.

Similar Jobs You Might Like

Anthropic

AI Research Engineer

Anthropic · 📍 San Francisco · On-Site

Anthropic is hiring a Research Engineer / Scientist in Alignment Science to build and run machine learning experiments focused on AI safety. You'll work with technologies like Python and contribute to exploratory research on powerful AI systems. This role requires a blend of scientific and engineering skills.

🏛️ On-Site · Mid-Level
7h ago
OpenAI

AI Research Engineer

OpenAI · 📍 San Francisco · On-Site

OpenAI is hiring a Researcher in Alignment to ensure AI systems follow human intent in complex scenarios. You'll focus on designing scalable solutions for AI alignment. This role is based in San Francisco.

🏛️ On-Site · Mid-Level
1 year ago
Anthropic

AI Research Engineer

Anthropic · 📍 San Francisco

Anthropic is seeking a Research Scientist/Engineer for their Alignment Finetuning team to develop techniques for training language models aligned with human values. You'll work with Python to implement novel finetuning techniques and improve model behavior.

7h ago
Anthropic

AI Research Engineer

Anthropic · 📍 London

Anthropic is hiring a Research Engineer for their ML Performance and Scaling team to ensure reliable and efficient training of AI models. You'll work with Python and focus on performance optimization and experimental design. This role requires deep technical expertise in large-scale ML systems.

Mid-Level
7h ago
Anthropic

AI Research Engineer

Anthropic · 📍 London

Anthropic is hiring a Research Engineer for their Production Model Post-Training team to enhance AI capabilities and safety. You'll use Python to implement and optimize a range of post-training techniques. This role requires experience in AI research and engineering.

Mid-Level
7h ago