Job Overview
Planck Technologies is a software development company dedicated to shaping futures and creating value through innovative IT solutions.
Key Responsibilities:
* Design, build, and optimize batch and streaming data pipelines in Databricks (PySpark, Spark SQL);
* Implement scalable data transformations aligned with the Medallion Architecture (Bronze, Silver, Gold), as sketched below;
* Ensure data quality, reliability, and performance through testing and monitoring;
* Manage data infrastructure using Terraform and GitOps principles;
* Operate workflows with Airflow on Azure Kubernetes Service (AKS);
* Collaborate with Data Architects, Project Managers, and stakeholders to align on solutions and delivery;
* Participate in code reviews and contribute to knowledge sharing within the engineering team;
* Document workflows, processes, and deployment standards.
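
For orientation, below is a minimal PySpark sketch of the kind of Bronze-to-Silver transformation these responsibilities describe. All table and column names (bronze.orders_raw, silver.orders, order_id, order_ts) are hypothetical illustrations, not part of this role's actual codebase:

```python
# Minimal sketch of a Bronze-to-Silver batch transformation in PySpark;
# table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw data as ingested.
bronze = spark.read.table("bronze.orders_raw")

# Silver: deduplicated, validated, conformed.
silver = (
    bronze
    .dropDuplicates(["order_id"])                     # remove replayed records
    .filter(F.col("order_ts").isNotNull())            # drop incomplete rows
    .withColumn("order_date", F.to_date("order_ts"))  # conformed date column
)

silver.write.mode("overwrite").saveAsTable("silver.orders")

# A streaming variant with Structured Streaming follows the same shape, e.g.:
# spark.readStream.table("bronze.orders_raw") ... .writeStream.toTable("silver.orders")
```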
Requirements:
* Strong experience with Databricks, PySpark, and Spark SQL;
* Proven expertise in batch and streaming data processing;
* Hands-on experience with Azure Data Lake Storage Gen2 (ADLS Gen2);
* Solid knowledge of Airflow, preferably on Kubernetes (AKS), as illustrated in the sketch after this list;
* Understanding of Medallion Architecture principles;
* Familiarity with Terraform and infrastructure-as-code practices;
* Awareness of data privacy, governance, and security standards.
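
As a rough illustration of the Airflow experience expected, here is a minimal DAG sketch (Airflow 2.x syntax); the DAG id, task, and callable are hypothetical, and the scheduling argument assumes Airflow 2.4 or later:

```python
# Minimal Airflow DAG sketch (Airflow 2.x); names are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_to_bronze():
    # Placeholder: in practice this might trigger a Databricks job
    # that lands raw source data in the Bronze layer.
    print("ingesting raw data to bronze")


with DAG(
    dag_id="medallion_pipeline",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",            # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(
        task_id="bronze_ingest",
        python_callable=ingest_to_bronze,
    )
```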
Preferred Skills:
* Experience with Talend and/or Fivetran;
* Knowledge of Databricks Asset Bundles;
* Familiarity with Vault, Helm charts, and Kafka monitoring tools.
Work Environment:
Remote, Portugal