
About Zscaler
Secure access for a cloud-first world
Key Highlights
- Public company (NASDAQ: ZS) with a valuation over $4B
- 7,000+ enterprise customers including Netflix & Siemens
- Headquartered in San Jose, California
- Over $500M raised in funding since inception
Zscaler, headquartered in San Jose, California, is a leader in cloud security, providing services to more than 7,000 enterprise customers, including major corporations like Netflix and Siemens. Founded in 2008, Zscaler has raised over $500 million in funding since inception and went public in 2018, with a current valuation of over $4 billion.
🎁 Benefits
Zscaler offers competitive salaries, equity options, generous PTO policies, and a flexible remote work policy to support work-life balance.
🌟 Culture
Zscaler fosters a culture of innovation and agility, emphasizing a cloud-first approach to security. The company values transparency and collaboration.
Overview
Zscaler is hiring a Senior Staff Data Engineer to design, build, and maintain the data systems that support its zero trust security platform. This is a hybrid role based in Israel and requires deep expertise in data engineering.
Job Description
Who you are
You have extensive experience in data engineering, with a strong background in designing and implementing data pipelines that support large-scale applications. You understand the complexities of data management and are skilled in optimizing data workflows to ensure efficiency and reliability.
You are proficient in various data technologies and have a solid understanding of data architecture principles. Your ability to collaborate with cross-functional teams allows you to translate business requirements into technical solutions effectively.
What you'll do
In this role, you will be responsible for designing and building robust data systems that enable Zscaler to deliver its zero trust security solutions. You will work closely with engineering teams to ensure that data is accessible, reliable, and secure, playing a key role in the company's data strategy.
You will also be involved in maintaining and optimizing existing data pipelines, ensuring they meet the evolving needs of the business. Your expertise will help drive data-driven decision-making across the organization, contributing to Zscaler's mission of enhancing cybersecurity for its clients.
What we offer
Zscaler provides a dynamic work environment where you can make a significant impact. You will have the opportunity to work with cutting-edge technologies and be part of a team that values innovation and collaboration. We encourage you to apply even if your experience doesn't match every requirement, as we believe in the potential of our people to grow and succeed.