Anthropic

About Anthropic

Building safe and reliable AI systems for everyone

🏢 Tech · 👥 1001+ employees · 📅 Founded 2021 · 📍 SoMa, San Francisco, CA · 💰 $29.3B raised
B2B · Artificial Intelligence · Deep Tech · Machine Learning · SaaS

Key Highlights

  • Headquartered in SoMa, San Francisco, CA
  • Raised $29.3 billion in funding, including $13 billion Series F
  • Over 1,000 employees focused on AI safety and research
  • Launched Claude, an AI chat assistant rivaling ChatGPT

Anthropic, headquartered in SoMa, San Francisco, is an AI safety and research company focused on developing reliable, interpretable, and steerable AI systems. With over 1,000 employees and backing from Google, Anthropic has raised $29.3 billion in funding, including a $13 billion Series F round.

🎁 Benefits

Anthropic offers comprehensive health, dental, and vision insurance for employees and their dependents, along with inclusive fertility benefits.

🌟 Culture

Anthropic's culture is rooted in AI safety and reliability, with a focus on producing less harmful outputs than existing AI systems.


AI Research Engineer (Mid-Level)

Anthropic · San Francisco (On-Site)

Posted 7h ago · 🏛️ On-Site · Mid-Level · AI Research Engineer · 📍 San Francisco · 💰 $320,000 - $485,000 / year

Overview

Anthropic is hiring a Privacy Research Engineer to design and implement privacy-preserving techniques for AI systems. You'll work with Python and ML frameworks like PyTorch and JAX in San Francisco. This position requires experience in privacy-preserving machine learning.

Job Description

Who you are

You have:

  • Experience with privacy-preserving machine learning and a strong grasp of the challenges and techniques in the field, including methods such as differential privacy, which you will use to audit and strengthen current methodology.
  • A track record of shipping products and features in fast-moving environments, adapting and delivering under pressure.
  • Strong Python skills and familiarity with machine learning frameworks such as PyTorch or JAX, for contributing to privacy-first training algorithms.
  • Deep familiarity with large language models, including how they work and how they are trained.
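The posting names differential privacy as a core technique but does not describe Anthropic's actual methods. As a rough, illustrative sketch only, a DP-SGD-style update clips each per-example gradient to a fixed norm, averages, and adds Gaussian noise scaled to that norm; the function name and parameter values below are hypothetical.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative DP-SGD-style aggregation: clip each per-example
    gradient to `clip_norm`, average, then add Gaussian noise whose
    scale is tied to the clipping bound (the query's sensitivity)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std proportional to per-batch sensitivity (clip_norm / batch size).
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)

grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2])]  # L2 norms 5.0 and ~0.22
noisy = dp_sgd_step(grads)
```

Clipping bounds how much any single example can move the model, which is what makes the added noise meaningful for privacy.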

What you'll do

In this role, you will:

  • Lead the privacy analysis of frontier models, carefully auditing how data is used to ensure safety throughout the process.
  • Develop privacy-first training algorithms and techniques for building robust AI systems that protect user privacy.
  • Develop evaluation and auditing techniques that measure the privacy of training algorithms against high standards of data protection.
  • Work with a small, senior team of engineers and researchers to enact a forward-looking privacy policy that advocates for responsible data handling on behalf of users.
  • Collaborate with stakeholders to integrate privacy considerations into every stage of AI development.
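The posting mentions measuring the privacy of training algorithms but gives no specifics. One standard, textbook-level building block for such analysis is the Gaussian-mechanism calibration: given a target (ε, δ) and a query's L2 sensitivity, it yields the noise scale required. This sketch is a generic illustration, not Anthropic's methodology, and the function name is hypothetical.

```python
import math

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Classic Gaussian-mechanism calibration: the noise standard
    deviation sufficient for (epsilon, delta)-differential privacy
    on a query with the given L2 sensitivity (valid for epsilon < 1)."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

# Tighter privacy budgets demand proportionally more noise.
sigma = gaussian_sigma(epsilon=0.5, delta=1e-5, sensitivity=1.0)
```

Auditing in this style runs the calibration in reverse: given the noise actually injected, it bounds the (ε, δ) guarantee the training pipeline can honestly claim.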

What we offer

At Anthropic, we provide competitive compensation and benefits, including optional equity donation matching, generous vacation and parental leave, and flexible working hours. Our office in San Francisco offers a collaborative environment where you can work closely with colleagues who share your commitment to building beneficial AI systems. We encourage you to apply even if your experience doesn't match every requirement, as we value diverse perspectives and backgrounds in our team.


Similar Jobs You Might Like



AI Research Engineer

OpenAI · 📍 San Francisco (On-Site)

OpenAI is hiring a Research Engineer in Privacy to safeguard user data while enhancing AI systems. You'll work with differential privacy, federated learning, and other privacy-preserving techniques. This role requires expertise in privacy-enhancing technologies.

🏛️ On-Site · Mid-Level · 2 months ago

Machine Learning Engineer

Anthropic · 📍 San Francisco

Anthropic is hiring ML/Research Engineers to develop systems that detect and mitigate misuse of AI technologies. You'll work with Python and machine learning frameworks like TensorFlow and PyTorch. This role requires experience in building classifiers and monitoring systems for AI safety.

Mid-Level · 6h ago

Applied Scientist

Anthropic · 📍 San Francisco

Anthropic is seeking an Applied Safety Research Engineer to develop methods for evaluating AI safety. You'll work with machine learning and Python to design experiments that improve model evaluations. This role requires a research-oriented mindset and experience in applied ML.

Mid-Level · 6h ago

AI Research Engineer

Anthropic · 📍 San Francisco

Anthropic is hiring a Research Engineer for their Cybersecurity RL team to advance AI capabilities in secure coding and vulnerability remediation. You'll work with Python and reinforcement learning techniques in either San Francisco or New York.

Mid-Level · 6h ago

Software Engineering

Anthropic · 📍 San Francisco

Anthropic is seeking Software Engineers for their Safeguards team to develop safety mechanisms for AI systems. You'll work with Java and Python to build monitoring systems and abuse detection infrastructure. This role requires 5-10 years of experience in software engineering.

Mid-Level · 6h ago