
About Yotta Labs
Make saving money fun and rewarding
Key Highlights
- 600,000+ users with $16 million in prizes paid out
- $26.7 million raised in Series A funding
- Located in Union Square, New York, NY
- Innovative savings model that offers prize draws
Yotta is a personal finance app based in Union Square, New York, that aims to transform how Americans save money. By pooling interest to fund prize draws instead of paying traditional interest, Yotta has attracted over 600,000 users and paid out $16 million in prizes.
Benefits
Yotta provides full medical, dental, and vision insurance, unlimited PTO, a 401(k) program, and an annual company offsite event. Employees enjoy free lunch.
Culture
Yotta's culture is centered around innovation in personal finance, making savings engaging and rewarding. The company values creativity and aims to ad...

GPU Cloud Platform Engineer • Mid-Level
Yotta Labs • United States - Remote
Overview
Yotta Labs is hiring a GPU Cloud Platform Engineer to design and operate large-scale GPU infrastructure for AI workloads. You'll work with technologies like Kubernetes and Docker to ensure high availability and performance. This role requires experience in high-performance computing and cloud environments.
Job Description
Who you are
You have a strong background in building and operating large-scale GPU clusters, ensuring stable operation of compute, network, and storage systems. Your experience includes monitoring and troubleshooting production issues, and you are familiar with performance testing and evaluation of multi-node GPU clusters. You are passionate about high-performance systems and distributed orchestration, and you thrive in a flexible, remote work environment that values innovation and autonomy.
You possess a deep understanding of containerized AI workloads and have experience deploying them in Kubernetes-based GPU clusters. Your technical expertise allows you to collaborate effectively with experts from leading institutions and tech companies, bridging the gap between AI and decentralized computing. You are eager to contribute to the development of a Decentralized Operating System (DeOS) for AI workload orchestration at a planetary scale.
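To give a flavor of this kind of work, here is a minimal sketch of launching a containerized AI workload on a GPU node with the official Kubernetes Python client. The image, namespace, and GPU count are illustrative placeholders, not Yotta Labs specifics.

```python
# Minimal sketch: placing a containerized AI workload on a GPU node using the
# Kubernetes Python client. Image, namespace, and GPU count are placeholders.
from kubernetes import client, config

def launch_gpu_pod(name: str = "llm-inference-demo", gpus: int = 1) -> None:
    config.load_kube_config()  # use config.load_incluster_config() when running inside a cluster

    container = client.V1Container(
        name=name,
        image="nvcr.io/nvidia/pytorch:24.05-py3",  # placeholder image
        command=["python", "-c", "import torch; print(torch.cuda.device_count())"],
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": str(gpus)},  # GPUs are requested via resource limits
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=name, labels={"workload": "ai"}),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    launch_gpu_pod()
```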
What you'll do
In this role, you will design, deploy, and operate large-scale, multi-cluster GPU infrastructure across data centers and cloud environments. You will be responsible for ensuring high availability, performance, and efficiency of containerized AI workloads, ranging from large language models to generative models. Your responsibilities will include conducting performance testing and evaluation of multi-node GPU clusters, as well as monitoring and troubleshooting any issues that arise.
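As a rough illustration of the monitoring side, the sketch below polls per-GPU utilization, memory, and temperature on a single node using NVIDIA's NVML bindings (pynvml); a production cluster would more likely export equivalent metrics through DCGM and Prometheus rather than a standalone script.

```python
# Minimal sketch of a per-node GPU health probe using NVIDIA's NVML bindings (pynvml).
import pynvml

def sample_gpu_metrics() -> list[dict]:
    pynvml.nvmlInit()
    samples = []
    try:
        for index in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(index)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            samples.append({
                "gpu": index,
                "gpu_util_pct": util.gpu,
                "mem_used_gib": mem.used / 2**30,
                "mem_total_gib": mem.total / 2**30,
                "temp_c": pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU),
            })
    finally:
        pynvml.nvmlShutdown()
    return samples

if __name__ == "__main__":
    for sample in sample_gpu_metrics():
        print(sample)
```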
You will collaborate closely with your team to build the next-generation AI compute cloud, contributing to the mission of democratizing access to AI resources by aggregating geo-distributed GPUs. Your work will directly impact the efficiency and scalability of AI development, enabling high-performance computing across a wide spectrum of hardware, from commodity to high-end GPUs.
What we offer
Yotta Labs provides a flexible and remote work environment that encourages innovation and autonomy. You will have the opportunity to work on cutting-edge technology that shapes the future of AI infrastructure. We value collaboration and support your professional growth through continuous learning and development opportunities. Join us in our mission to revolutionize AI workload orchestration and make a significant impact in the field of high-performance computing.
Similar Jobs You Might Like

Cloud Engineer
Point72 is hiring a Cloud Engineer to design and maintain scalable cloud infrastructure. You'll work with AWS services and Infrastructure as Code using Terraform. This position requires experience in cloud infrastructure design and automation.

GPU Engineer
Apple is hiring a GPU Engineer to join their Platform Architecture team, focusing on performance analysis and optimization for ML frameworks. You'll work with technologies like CUDA and C++ to enhance GPU performance. This position requires 3+ years of relevant industry experience.

GPU Engineer
Apple is hiring a GPU Engineer to join the Platform Architecture team, focusing on performance analysis and optimization for ML frameworks. You'll work with technologies like CUDA and C++ in Austin.

AI Engineer
Reflection is hiring a Member of Technical Staff - GPU Infrastructure to design and operate large-scale GPU infrastructure. You'll work with technologies like CUDA, PyTorch, and Kubernetes to optimize performance and reliability. This position requires deep systems engineering experience in high-performance computing environments.

Staff Engineer
Cohere is hiring a Staff Software Engineer for their GPU Infrastructure team to build and operate superclusters for AI model training. You'll work with technologies like Python, Kubernetes, and AWS. This role requires expertise in high-performance computing (HPC) and cloud infrastructure.