
About Databricks
Empowering data teams with unified analytics
Key Highlights
- Headquartered in San Francisco, CA
- Valuation of $43 billion with $3.5 billion raised
- Serves over 7,000 customers including Comcast and Shell
- Utilizes Apache Spark for big data processing
Databricks, headquartered in San Francisco, California, is a unified data analytics platform that simplifies data engineering and collaborative data science. Trusted by over 7,000 organizations, including Fortune 500 companies like Comcast and Shell, Databricks has raised $3.5 billion in funding at a valuation of $43 billion.
🎁 Benefits
Databricks offers competitive salaries, equity options, generous PTO policies, and a remote-friendly work environment. Employees also benefit from a l...
🌟 Culture
Databricks fosters a culture of innovation with a strong emphasis on data-driven decision-making. The company values collaboration across teams and en...
Overview
Databricks is hiring a Senior Data Engineer to drive projects focused on data analysis and infrastructure. You'll work with Java, Scala, Go, Python, and SQL to design and implement cost-attribution models and systems. This position requires 5+ years of experience in data engineering or data science.
Job Description
Who you are
You have 5+ years of production-level experience in software engineering with a focus on data engineering. Your expertise spans languages such as Java, Scala, Go, Python, and SQL, allowing you to tackle complex data challenges effectively. You possess a strong understanding of large-scale distributed systems and have experience with cloud technologies like AWS, Azure, and GCP. Your background includes developing foundational data models and telemetry pipelines that enhance system reliability and efficiency.
You are well-versed in designing and implementing cost-attribution models and systems, combining your data engineering skills with a deep technical understanding of scheduling platforms. You thrive in environments that demand both flexibility and efficiency while maintaining safety across diverse workloads. Your ability to monitor, forecast, and proactively manage capacity demand across systems sets you apart as a leader in your field.
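To make the cost-attribution idea concrete, here is a minimal sketch (not Databricks' actual system) that splits each cluster's cost across workloads in proportion to the compute they consumed; the workload names, cluster IDs, and dollar figures are hypothetical:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class UsageRecord:
    workload: str      # hypothetical workload identifier
    cluster: str       # cluster the workload ran on
    core_hours: float  # compute consumed, in core-hours


def attribute_costs(usage, cluster_costs):
    """Split each cluster's total cost across workloads in proportion
    to the core-hours they consumed on that cluster."""
    by_cluster = defaultdict(list)
    for rec in usage:
        by_cluster[rec.cluster].append(rec)

    attributed = defaultdict(float)
    for cluster, records in by_cluster.items():
        total = sum(r.core_hours for r in records)
        if total == 0:
            continue
        for r in records:
            attributed[r.workload] += cluster_costs[cluster] * (r.core_hours / total)
    return dict(attributed)


if __name__ == "__main__":
    usage = [
        UsageRecord("etl_daily", "c1", 120.0),
        UsageRecord("ml_training", "c1", 60.0),
        UsageRecord("etl_daily", "c2", 30.0),
    ]
    costs = {"c1": 900.0, "c2": 150.0}  # dollars per cluster for the billing period
    print(attribute_costs(usage, costs))
    # {'etl_daily': 750.0, 'ml_training': 300.0}
```

Proportional attribution is only one possible policy; real systems often layer on reserved-capacity pricing or idle-cost allocation, but the shape of the computation is similar.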
What you'll do
In this role, you will collaborate with your team to drive projects centered around data analysis and infrastructure for critical business domains. You will design and implement systems that generalize capacity management, ensuring operational efficiency and safety. Your work will involve building systems that automatically maintain the right balance between efficiency and safety, providing end-to-end visibility and actionable insights for other teams.
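As an illustration of that efficiency-versus-safety trade-off, the following sketch (an assumption, not the team's implementation) forecasts demand from recent daily peaks and flags when the forecast eats into a reserved safety margin; the window size, margin, and capacity numbers are hypothetical:

```python
from statistics import mean


def forecast_demand(history, window=7):
    """Naive forecast: mean of the most recent `window` daily peak values."""
    return mean(history[-window:])


def capacity_status(history, capacity, safety_margin=0.2):
    """Check whether forecast demand leaves the required safety headroom.

    Returns (forecast, utilization, needs_action), where needs_action is
    True when forecast utilization encroaches on the reserved margin.
    """
    forecast = forecast_demand(history)
    utilization = forecast / capacity
    needs_action = utilization > (1.0 - safety_margin)
    return forecast, utilization, needs_action


if __name__ == "__main__":
    daily_peak_cores = [610, 640, 655, 700, 720, 735, 760, 790]
    forecast, util, act = capacity_status(daily_peak_cores, capacity=900)
    print(f"forecast={forecast:.0f} cores, utilization={util:.0%}, scale_up={act}")
```

In practice the forecast would come from a proper time-series model and the action would feed an autoscaler or a capacity-planning workflow, but the headroom check captures the balance the role describes.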
You will be responsible for developing foundational data models, metrics, and telemetry pipelines that track product-wide system reliability and efficiency. Your contributions will empower teams across the organization to make data-driven decisions. You will also engage in continuous improvement of existing systems, leveraging your expertise to enhance performance and scalability.
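For a sense of what a reliability-and-efficiency telemetry pipeline aggregates, here is a minimal sketch under assumed inputs (the pipeline names and run events are hypothetical, and a production version would read from event streams or tables rather than in-memory lists):

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class RunEvent:
    pipeline: str      # hypothetical pipeline name
    succeeded: bool    # whether the run completed successfully
    duration_s: float  # wall-clock runtime in seconds


def reliability_metrics(events):
    """Aggregate per-pipeline success rate and mean runtime from raw run events."""
    grouped = defaultdict(list)
    for e in events:
        grouped[e.pipeline].append(e)

    metrics = {}
    for pipeline, runs in grouped.items():
        successes = sum(1 for r in runs if r.succeeded)
        metrics[pipeline] = {
            "runs": len(runs),
            "success_rate": successes / len(runs),
            "mean_duration_s": sum(r.duration_s for r in runs) / len(runs),
        }
    return metrics


if __name__ == "__main__":
    events = [
        RunEvent("ingest_orders", True, 320.0),
        RunEvent("ingest_orders", False, 410.0),
        RunEvent("ingest_orders", True, 305.0),
        RunEvent("daily_rollup", True, 95.0),
    ]
    for name, m in reliability_metrics(events).items():
        print(name, m)
```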
What we offer
At Databricks, you will be part of a dynamic team that values innovation and collaboration. We offer a competitive salary and benefits package, along with opportunities for professional growth and development. You will work in a supportive environment that encourages you to share your ideas and contribute to impactful projects. Join us in shaping the future of data and AI, and make a difference in how organizations leverage data for success.
Similar Jobs You Might Like

Data Engineer
Catawiki is seeking a Senior Data Engineer to build and maintain a robust data ecosystem that supports their global marketplace. You'll work with technologies like Airflow, AWS, and PostgreSQL to ensure data quality and scalability. This role requires strong experience in data engineering and collaboration with cross-functional teams.

Data Engineer
LiveIntent is hiring a Senior Data Engineer to develop and deploy next-generation technology in Copenhagen. You'll work with Scala, Apache Spark, and Hadoop to manage large-scale data processing. This position requires experience in data engineering and a strong interest in programming.

Data Engineer
Apple is hiring a Senior Data Engineer to design and maintain data pipelines that drive capacity and cost decisions across its infrastructure. You'll work with technologies like AWS, Apache, and Airflow, requiring strong skills in Python and SQL.

Data Engineer
Storable is seeking a Senior Data Engineer to enhance data quality and manage scalable data pipelines. You'll work with technologies like Apache Airflow and Apache Spark to drive data operations. This role requires significant experience in data engineering and ETL development.