We are looking for a Data Engineer - Medicine & Analytics for our pharmaceutical client.
This role offers more than a position: it provides the opportunity to shape the future of data and analytics within Medicine. As a Data Engineer, you will build and evolve reliable, scalable data products that enable high-impact insights, accelerate decision-making, and improve patient outcomes.
You will be a key contributor in delivering a modern data ecosystem across cloud and enterprise platforms. With a strong engineering mindset, you'll drive change, embrace continuous improvement, and thrive in a fast‐moving environment where priorities evolve day by day.
TASKS & RESPONSIBILITIES
- Design, build, and maintain end‐to‐end data pipelines and integrations to support Medicine & Analytics use cases (batch and event‐driven patterns where relevant).
- Develop, operate, and optimize integrations using SnapLogic and AWS services such as S3, AWS Lambda, and AWS Glue to ensure robust ingestion, transformation, and orchestration.
- Implement and maintain analytics‐ready data models in Snowflake, ensuring performance, scalability, and cost‐efficient design.
- Build transformation logic and analytics layers using dbt, including modular modeling, testing, documentation, and deployment best practices.
- Contribute to and enforce data governance standards by leveraging tools such as Collibra, ensuring metadata quality, lineage, ownership, and consistent definitions.
- Partner with Data Quality stakeholders to implement and monitor quality controls using Ataccama, including rules, profiling, exception handling, and remediation workflows.
- Support data lifecycle processes and operationalization of data products using Innovator (as applicable in the ecosystem) to align delivery with platform and product standards.
- Proactively identify opportunities to simplify architecture, automate repetitive work, and reduce operational effort (observability, alerting, self‐healing patterns).
- Ensure all solutions follow security, privacy, and compliance expectations (e.g., regulated environment practices, audit readiness, access controls, data handling).
- Collaborate closely with Product Owners, Data Scientists, Analysts, Architects, and business stakeholders to translate needs into reliable, reusable data assets.
- Act as a role model for engineering excellence: version control, CI/CD, code reviews, documentation, and operational runbooks.
SKILLS
- Degree in Computer Science, Engineering, Data/Information Systems, or a related field, with several years of relevant experience in data engineering, analytics engineering, or similar roles.
- Hands-on experience building integrations and pipelines using tools such as SnapLogic (or a comparable iPaaS) and cloud services, specifically AWS S3, Lambda, and Glue.
- Strong experience with Snowflake including data modeling, performance tuning, and secure data access patterns.
- Proven experience with dbt (models, tests, macros, documentation, environments, CI/CD integration).
- Familiarity with data governance and metadata management, ideally with Collibra; understanding of lineage, stewardship, and data catalog practices.
- Experience implementing data quality controls and monitoring, ideally with Ataccama (or equivalent tooling and approaches).
- Solid knowledge of software engineering fundamentals: Python/SQL, Git, coding standards, automated testing, and production support practices.
- Demonstrated ability to work independently, manage priorities, and proactively drive work forward in a dynamic environment.
- Strong stakeholder management, analytical thinking, and structured problem‐solving skills.
- Excellent communication skills in English (Spanish is a strong plus), enabling clear interaction with technical and non‐technical stakeholders.
NICE TO HAVE
- 3-5 years in a similar role
SCHEDULE
- 08:00/09:00 to 17:00/18:00, Monday to Friday (flexible)
- 100% remote with occasional visits to the office.
CONDITIONS
- Salary package based on your profile.
- Permanent Contract
- Learning & Development