Suvoda

About Suvoda

Streamlining clinical trials with advanced IRT solutions

🏢 Tech · 👥 251-1K employees · 📅 Founded 2012 · 📍 Conshohocken, Pennsylvania, United States

Key Highlights

  • Headquartered in Conshohocken, PA
  • Specializes in SaaS for clinical trial management
  • 4-6 week deployment for IRT/IWRS solutions
  • Serves numerous biopharmaceutical clients

Suvoda, headquartered in Conshohocken, Pennsylvania, specializes in SaaS solutions for randomization and trial supply management in clinical trials. Its Interactive Response Technology (IRT/IWRS) is used by biopharmaceutical companies to streamline these processes, with a deployment time of 4-6 weeks.

🎁 Benefits

Suvoda offers competitive salaries, equity options, generous PTO, and a flexible remote work policy to support work-life balance.

🌟 Culture

Suvoda fosters a culture centered on innovation in clinical trial technology, emphasizing collaboration and adaptability to meet client needs.

Overview

Suvoda is hiring a Data Engineer to evolve their data platform towards a data mesh architecture. You'll design and build domain-oriented data products and optimize ETL/ELT pipelines using AWS technologies. This role requires at least 4 years of experience in data engineering.

Job Description

Who you are

You have at least 4 years of experience in data engineering, demonstrating ownership of complex data systems and a solid understanding of data mesh principles and decentralized data architecture. Your technical background includes a Bachelor's degree in a relevant field such as Computer Science or Mathematics, equipping you with the skills necessary to tackle challenging data problems.

You possess strong expertise in AWS data lake technologies, including S3, Glue, Lake Formation, Athena, and Redshift. Your experience with ETL/ELT pipelines using AWS Glue and PySpark allows you to ensure scalable, high-performance data processing across platforms. You are also familiar with GraphQL APIs, which you will use to expose domain-owned data products.
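As a rough illustration of the "domain-owned data product" idea mentioned above (the record type, fields, and class names here are invented for the sketch, not taken from the posting), the owning team publishes a stable schema and a resolver-style query interface while keeping storage details such as S3, Athena, or Redshift hidden behind it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrialSupplyRecord:
    """One row of a hypothetical trial-supply domain data product."""
    site_id: str
    kit_count: int

class TrialSupplyDataProduct:
    """Minimal sketch of a domain-owned data product: consumers depend on
    the published schema and query method, never on the storage layer."""

    def __init__(self, records):
        self._records = list(records)

    def query(self, site_id=None):
        """Resolver-style entry point, analogous to a GraphQL field
        resolver over the domain's data."""
        rows = self._records
        if site_id is not None:
            rows = [r for r in rows if r.site_id == site_id]
        return rows

product = TrialSupplyDataProduct([
    TrialSupplyRecord("site-01", 40),
    TrialSupplyRecord("site-02", 25),
])
matches = product.query(site_id="site-01")
```

In a real data mesh, each domain team would ship an interface like this (often as a GraphQL API) rather than handing consumers raw tables.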

Your collaborative spirit shines through as you work closely with product, engineering, and analytics teams to deliver robust, reusable data solutions. You are committed to supporting data governance, quality, observability, and API design best practices, ensuring that the data infrastructure is reliable and efficient.

You stay current with emerging technologies and industry trends, continuously seeking ways to evolve the data platform. Your ability to contribute to automation and CI/CD practices for data infrastructure and pipelines is a testament to your proactive approach in the field.

Desirable

Experience with near real-time analytics and reporting is a plus, as is familiarity with data governance frameworks. You are encouraged to apply even if your experience doesn't match every requirement, as your curiosity and willingness to learn are highly valued.

What you'll do

In this role, you will design and build domain-oriented data products that support near real-time reporting. You will contribute to the design and implementation of a data mesh architecture, ensuring that data products are easily accessible and manageable. Your responsibilities will include building and maintaining a modern AWS-based data lake, utilizing services such as S3, Glue, Lake Formation, Athena, and Redshift to create a robust data environment.

You will develop and optimize ETL/ELT pipelines using AWS Glue and PySpark, supporting both batch and streaming data workloads. Implementing AWS DMS pipelines to replicate data into Aurora PostgreSQL for near real-time analytics will be a key part of your role, allowing for timely insights and reporting.
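The DMS replication described above is, at its core, a stream of change events applied idempotently to the analytics target. A minimal sketch of that upsert pattern, using stdlib SQLite in place of Aurora PostgreSQL and an invented `shipments` table (both are assumptions for illustration):

```python
import sqlite3

# Stand-in for the analytics target; in the role described, this would be
# Aurora PostgreSQL fed by AWS DMS change-data-capture.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE shipments (shipment_id TEXT PRIMARY KEY, status TEXT)"
)

def apply_change(conn, shipment_id, status):
    """Idempotent upsert: replaying the same change event twice leaves the
    table in the same state, which is what makes CDC replication safe."""
    conn.execute(
        "INSERT INTO shipments (shipment_id, status) VALUES (?, ?) "
        "ON CONFLICT(shipment_id) DO UPDATE SET status = excluded.status",
        (shipment_id, status),
    )

apply_change(conn, "sh-1", "packed")
apply_change(conn, "sh-1", "shipped")  # later change event updates in place
rows = conn.execute("SELECT shipment_id, status FROM shipments").fetchall()
```

Because each event lands as an upsert keyed on the primary key, the target stays consistent even when the replication stream retries or replays events.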

Collaboration is essential, as you will work with various teams to deliver data solutions that meet business needs. You will also support data governance initiatives, ensuring that data quality and observability are prioritized in all projects. Your contributions to automation and CI/CD practices will help streamline data infrastructure and enhance overall efficiency.

What we offer

Suvoda provides a dynamic work environment where innovation is encouraged. You will have the opportunity to work with cutting-edge technologies and contribute to the evolution of our data platform. We value your expertise and are committed to supporting your professional growth through continuous learning opportunities and collaboration with talented colleagues.

Join us in shaping the future of data engineering at Suvoda, where your skills will make a significant impact on our data-driven initiatives. We look forward to welcoming you to our team.

