Databricks

About Databricks

Empowering data teams with unified analytics

🏢 Tech · 👥 1K-5K · 📅 Founded 2013 · 📍 San Francisco, California, United States

Key Highlights

  • Headquartered in San Francisco, CA
  • Valuation of $43 billion with $3.5 billion raised
  • Serves over 7,000 customers including Comcast and Shell
  • Utilizes Apache Spark for big data processing

Databricks, headquartered in San Francisco, California, provides a unified data analytics platform that simplifies data engineering and collaborative data science. Trusted by over 7,000 organizations, including Fortune 500 companies like Comcast and Shell, Databricks has raised $3.5 billion in funding at a valuation of $43 billion.

🎁 Benefits

Databricks offers competitive salaries, equity options, generous PTO policies, and a remote-friendly work environment. Employees also benefit from a l...

🌟 Culture

Databricks fosters a culture of innovation with a strong emphasis on data-driven decision-making. The company values collaboration across teams and en...

Databricks

Software Engineering

Databricks • San Francisco - On-Site

Posted 1d ago · 🏛️ On-Site · Software Engineering · 📍 San Francisco · 💰 $142,200 - $204,600 / yearly
Apply Now β†’

Overview

Databricks is hiring a Software Engineer for GenAI inference to design and optimize the inference engine for their Foundation Model API. You'll work with technologies like Python, Java, and TensorFlow in San Francisco.

Job Description

Who you are

You have a strong background in software engineering with experience in designing and optimizing inference engines. Your expertise in Python and Java allows you to contribute effectively to large-scale systems, particularly in the context of machine learning and AI. You understand the intricacies of working with large language models (LLMs) and are familiar with the challenges of optimizing for latency and throughput.

You thrive in collaborative environments, working closely with researchers and cross-functional teams to integrate new model architectures and features into production systems. Your problem-solving skills enable you to identify bottlenecks and implement effective solutions, ensuring the reliability and efficiency of inference pipelines.

What you'll do

In this role, you will contribute to the design and implementation of the inference engine that powers Databricks' Foundation Model API. You will collaborate with researchers to bring innovative model architectures into the engine, optimizing for performance across GPUs and accelerators. Your responsibilities will include building and maintaining instrumentation and profiling tools to guide optimizations, as well as developing scalable routing and memory management mechanisms for inference workloads.

You will also support the reliability and fault tolerance of inference pipelines, implementing strategies for A/B launches and model versioning. Your work will involve orchestrating distributed inference infrastructure, balancing load, and managing communication overhead across nodes. You will document and share your learnings, contributing to internal best practices and open-source efforts.
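The posting itself includes no code, but as a loose illustration of the instrumentation and profiling work described above, here is a minimal Python sketch of a per-stage latency recorder for inference requests. The `LatencyRecorder` and `timed` names are hypothetical and not part of any Databricks API; the decode step is simulated.

```python
import time
import statistics
from collections import defaultdict


class LatencyRecorder:
    """Collects wall-clock latencies per inference stage (e.g. prefill, decode)."""

    def __init__(self):
        self._samples = defaultdict(list)

    def record(self, stage: str, seconds: float) -> None:
        self._samples[stage].append(seconds)

    def summary(self) -> dict:
        # Report count, p50 and p95 per stage (milliseconds).
        out = {}
        for stage, xs in self._samples.items():
            qs = statistics.quantiles(xs, n=100)
            out[stage] = {
                "count": len(xs),
                "p50_ms": round(qs[49] * 1e3, 3),
                "p95_ms": round(qs[94] * 1e3, 3),
            }
        return out


def timed(recorder: LatencyRecorder, stage: str):
    """Decorator that records the latency of each call under the given stage name."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                recorder.record(stage, time.perf_counter() - start)
        return inner
    return wrap


if __name__ == "__main__":
    recorder = LatencyRecorder()

    @timed(recorder, "decode_step")
    def decode_step():
        time.sleep(0.002)  # stand-in for one token-generation step

    for _ in range(200):
        decode_step()
    print(recorder.summary())
```

In a real inference pipeline the same pattern would wrap stages such as request routing, prefill, and per-token decode, so percentile latencies can guide the optimizations the role calls for.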

What we offer

At Databricks, you will be part of a team that is at the forefront of AI and machine learning technology. We offer a collaborative work environment where innovation is encouraged, and your contributions will have a direct impact on our products and services. You will have opportunities for professional growth and development, working alongside some of the brightest minds in the industry.

Interested in this role?

Apply now or save it for later. Get alerts for similar jobs at Databricks.

Similar Jobs You Might Like

Based on your interests and this role

Databricks

Staff Engineer

Databricks • 📍 San Francisco - On-Site

Databricks is hiring a Staff Software Engineer for GenAI inference to lead the architecture and optimization of their inference engine. You'll work with technologies like Python and TensorFlow to ensure high throughput and low latency. This position requires significant experience in machine learning and AI.

πŸ›οΈ On-SiteLead
1d ago
OpenAI

Software Engineering

OpenAI • 📍 San Francisco - On-Site

OpenAI is hiring a Software Engineer for their Model Inference team to optimize AI models for high-volume production environments. You'll work with Azure and Python to enhance model performance and efficiency. This position requires 5+ years of experience in software engineering.

πŸ›οΈ On-SiteMid-Level
1 year ago
Databricks

Staff Engineer

Databricks • 📍 San Francisco - On-Site

Databricks is hiring a Staff Software Engineer for GenAI Performance and Kernel to lead the design and optimization of high-performance GPU kernels. You'll work closely with ML researchers and systems engineers to enhance inference performance. This role requires expertise in performance engineering and GPU optimization.

πŸ›οΈ On-SiteLead
1d ago
OpenAI

Technical Lead

OpenAI • 📍 San Francisco

OpenAI is hiring a Technical Lead for the Sora team to optimize model serving efficiency and enhance inference performance. You'll work closely with research and product teams, leveraging your expertise in GPU and kernel-level systems.

Lead
10 months ago
Lassie

Software Engineering

Lassie • 📍 San Francisco - On-Site

Lassie is hiring a Senior Software Engineer to build AI agents for healthcare practices. You'll work with Python, Django, and various cloud technologies to create reliable production systems. This role requires strong experience in full-stack development.

πŸ›οΈ On-SiteSenior
2 months ago