
About Suvoda
Streamlining clinical trials with advanced IRT solutions
Key Highlights
- Headquartered in Conshohocken, PA
- Specializes in SaaS for clinical trial management
- 4-6 week deployment for IRT/IWRS solutions
- Serves numerous biopharmaceutical clients
Suvoda, headquartered in Conshohocken, Pennsylvania, specializes in SaaS solutions for randomization and trial supply management in clinical trials. Its Interactive Response Technology (IRT/IWRS) is used by biopharmaceutical companies to streamline trial processes, with a deployment time of 4-6 weeks.
🎁 Benefits
Suvoda offers competitive salaries, equity options, generous PTO, and a flexible remote work policy to support work-life balance.
🌟 Culture
Suvoda fosters a culture centered on innovation in clinical trial technology, emphasizing collaboration and adaptability in meeting client needs.
Overview
Suvoda is seeking a Cloud Data Engineer to evolve its data platform toward a data mesh architecture. You'll design and build domain-oriented data products and optimize ETL/ELT pipelines using AWS Glue and PySpark. This position requires at least 4 years of experience in data engineering.
Job Description
Who you are
You have at least 4 years of experience in data engineering, demonstrating ownership of complex data systems and a strong understanding of data mesh principles and decentralized data architecture. Your technical background includes solid experience with AWS data lake technologies such as S3, Glue, Lake Formation, Athena, and Redshift, which you have utilized to build and maintain scalable data solutions.
You are skilled in designing and optimizing ETL/ELT pipelines, particularly using AWS Glue and PySpark, ensuring high-performance data processing across platforms. Your experience also includes implementing AWS DMS pipelines to replicate data into Aurora PostgreSQL for near real-time analytics and reporting. You are committed to supporting data governance, quality, and observability, and you understand the importance of API design best practices.
What you'll do
In this role, you will contribute to the design and implementation of a data mesh architecture, working closely with product, engineering, and analytics teams to deliver robust, reusable data solutions. You will design and build domain-oriented data products that support near real-time reporting, ensuring that the data platform evolves effectively.
You will be responsible for building and maintaining a modern AWS-based data lake, leveraging services like S3, Glue, Lake Formation, Athena, and Redshift to optimize data storage and retrieval. Your role will also involve developing and optimizing ETL/ELT pipelines to support both batch and streaming data workloads, ensuring that the data infrastructure is scalable and efficient.
Collaboration will be key as you work with cross-functional teams to implement automation and CI/CD practices for data infrastructure and pipelines. You will stay current with emerging technologies and industry trends, helping to evolve the platform and enhance its capabilities.
What we offer
Suvoda provides a dynamic work environment where innovation is encouraged, and your contributions will have a direct impact on the company's data strategy. You will have the opportunity to work with cutting-edge technologies and be part of a team that values collaboration and continuous learning. We encourage you to apply even if your experience doesn't match every requirement, as we believe in the potential of diverse backgrounds and perspectives.