Design, build, and optimize data pipelines to drive business growth. We're looking for a skilled Data Engineer to shape the future of our data platform through innovative, reliable solutions.
Our ideal candidate is experienced in designing, building, and optimizing batch and streaming data pipelines in Databricks (PySpark, Spark SQL).
* Implement scalable data transformations aligned with our architecture principles;
* Evaluate and ensure data quality, reliability, and performance through thorough testing and monitoring;
* Develop and manage data infrastructure using Terraform and GitOps principles;
* Operate workflows with Airflow on Azure Kubernetes Service (AKS);
* Collaborate closely with Data Architects, Project Managers, and stakeholders to align on solutions and delivery.
We're interested in candidates with:
* Strong experience with Databricks, PySpark, and Spark SQL;
* Proven expertise in batch and streaming data processing;
* Hands-on experience with Azure Data Lake Storage Gen2 (ADLS);
* Solid knowledge of Airflow, preferably on Kubernetes (AKS);
* Understanding of Medallion Architecture principles;
* Familiarity with Terraform and infrastructure-as-code practices;
* Awareness of data privacy, governance, and security standards.
Nice to Have:
* Experience with Talend and/or Fivetran;
* Knowledge of Databricks Asset Bundles;
* Familiarity with Vault, Helm charts, and Kafka monitoring tools.
Location: Remote, Portugal
Let's Work Together!