42dot

About 42dot

Revolutionizing urban transportation with UMOS

🏢 Tech · 👥 51-250 · 📅 Founded 2019 · 📍 Seoul, South Korea

Key Highlights

  • Developing the Urban Mobility Operating System (UMOS)
  • Based in Seoul, South Korea, with a team of 51-250 employees
  • Focused on transitioning to autonomous transportation solutions
  • Aims to enhance urban mobility and reduce congestion

42dot is a technology company based in Seoul, South Korea, focused on transforming urban transportation with its Urban Mobility Operating System (UMOS). This cloud-based platform aims to streamline mobility services and facilitate the transition to autonomous vehicles. With a growing team of 51-250 employees, the company works to enhance urban mobility and reduce congestion.

🎁 Benefits

Employees enjoy competitive salaries, stock options, flexible working hours, and a generous PTO policy. The company also offers remote work options.

🌟 Culture

42dot fosters a culture of innovation and agility, encouraging employees to contribute ideas that drive the future of transportation.

Overview

42dot is seeking a Senior AI Data Pipeline Engineer to architect and scale global data pipelines for AI workloads. You'll work with technologies like Apache Spark and Databricks to optimize large-scale data processing. This role requires extensive experience in building production-grade data pipelines.

Job Description

Who you are

You have extensive professional experience in building and operating production-grade data pipelines for massive-scale AI/ML datasets — you've designed and implemented high-performance systems that handle petabyte-scale data efficiently. Your strong proficiency in distributed processing frameworks, particularly Apache Spark and the Databricks ecosystem, allows you to optimize data workflows effectively.

You possess deep hands-on experience with workflow orchestration tools like Apache Airflow — managing complex dependency graphs is second nature to you. Your solid understanding of Kubernetes and containerization enables you to deploy and scale data environments robustly, ensuring reliable execution of data workloads.
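Under the hood, orchestrators like Airflow resolve a task dependency graph into an execution order via topological sorting. A minimal sketch of that idea using only the Python standard library (the task names here are hypothetical, not from the posting):

```python
from graphlib import TopologicalSorter

# Hypothetical ETL graph: one extract feeds two transforms,
# which must both finish before a join, which feeds the load.
deps = {
    "transform_a": {"extract"},
    "transform_b": {"extract"},
    "join": {"transform_a", "transform_b"},
    "load": {"join"},
}

# static_order() yields tasks so every task appears after its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

A real Airflow DAG adds scheduling, retries, and operators on top, but the ordering guarantee is the same: a task never runs before everything it depends on has completed.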

What you'll do

In this role, you will design and build high-performance, scalable data pipelines to support diverse AI and Machine Learning initiatives across the organization. You will architect and implement multi-region data infrastructure to ensure global data availability and seamless synchronization. Your responsibilities will include developing flexible pipeline architectures that allow for complex branching and logic isolation to support multiple concurrent AI projects.
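One common way to get the branching and logic isolation described above is to route each record to an isolated, per-project transform so concurrent AI projects never share branch logic. A hedged sketch, with hypothetical record fields and branch names of my own invention:

```python
from typing import Callable, Dict, List

Record = Dict[str, object]

def route(record: Record) -> str:
    """Pick a branch per record; each branch is owned by one project."""
    return "vision" if record.get("modality") == "image" else "text"

# Each branch is an isolated callable, so changing one project's
# transform cannot affect another project's pipeline.
BRANCHES: Dict[str, Callable[[Record], Record]] = {
    "vision": lambda r: {**r, "pipeline": "vision"},
    "text": lambda r: {**r, "pipeline": "text"},
}

def run(records: List[Record]) -> List[Record]:
    return [BRANCHES[route(r)](r) for r in records]

out = run([{"modality": "image"}, {"modality": "audio"}])
```

The design choice is that routing and transformation are separate concerns: new branches are registered without touching the dispatch logic.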

You will optimize large-scale data processing workloads using Databricks and Spark to maximize throughput and minimize processing costs. Collaborating with AI researchers and platform teams, you will streamline the flow of high-quality data into training and evaluation pipelines, ensuring that the data infrastructure meets the needs of various AI initiatives.
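A first lever for the throughput-versus-cost optimization mentioned here is right-sizing Spark partitions so tasks are neither tiny (scheduler overhead) nor huge (spills). The arithmetic is simple; a stdlib-only sketch, assuming a 128 MiB target partition size (a common rule of thumb, not a 42dot figure):

```python
import math

def target_partitions(total_bytes: int,
                      target_partition_bytes: int = 128 * 1024 * 1024,
                      min_partitions: int = 1) -> int:
    """Estimate a repartition count so each partition lands near the target size."""
    return max(min_partitions, math.ceil(total_bytes / target_partition_bytes))

# A 1 TiB dataset at a 128 MiB target works out to 8192 partitions.
print(target_partitions(1024**4))  # 8192
```

In practice you would feed a value like this to `DataFrame.repartition()` and then measure, since skewed keys and compression ratios shift the real optimum.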

What we offer

At 42dot, you will be part of a dynamic team that is at the forefront of AI technology. We provide a collaborative environment where your contributions will directly impact our mission-critical AI workloads. You will have opportunities for professional growth and development, working alongside experts in the field. We encourage you to apply even if your experience doesn't match every requirement, as we value diverse perspectives and skills.

