Backend Data Engineer
We are seeking an experienced Backend Data Engineer to join our team. As a key member of our engineering team, you will be responsible for designing and implementing scalable, efficient systems that integrate data and maintain its integrity.
As a Backend Data Engineer, your mission is to support the development of BI backend solutions. You will collaborate with developers and cross-functional teams to implement guidelines and workflows that enhance the performance and reliability of our BI systems.
In this role, you'll contribute to the quality and consistency of data operations by handling tasks of varying complexity, with increasing autonomy and ownership. You'll be part of a team where continuous improvement and collaboration are key to delivering impactful, high-quality BI solutions.
Key Responsibilities:
* Interpreting and delivering business-oriented tickets related to data pipelines, including the creation, adjustment, and restructuring of data fields and tables.
* Managing and executing data operations, ranging from column updates to complex data movements across internal and external systems.
* Designing and maintaining data processes to ensure accurate data flow, supporting ongoing reporting requirements.
* Collaborating with data analysts and cross-functional teams to understand context, validate data outcomes, and ensure alignment between technical delivery and business expectations.
Requirements:
* At least three years of professional hands-on experience in developing data warehouse or data lake solutions, with a strong understanding of data modeling principles.
* Proficiency in Python and building reliable ETL/ELT pipelines to support business-driven data flows.
* Experience with SQL, including query building, optimization, and database management.
* Proficiency with Azure services such as Data Factory, Blob Storage, and Key Vault, along with the ability to use them in real-world data scenarios.
* Solid experience with Databricks, including building workflows and managing data pipelines.
* Working knowledge of AWS native services (e.g., Glue / Data Pipeline, Airflow, S3, KMS, Step Functions) and hands-on cloud-native development principles.
* Understanding of data architecture concepts, including dimensional modeling and versioned data handling (e.g., SCD).
* Familiarity with Gitflow-based CI/CD processes, including pipeline development and branch management.
Nice to Have:
* Knowledge of Microsoft Power Platform, Power Query, and Excel.
* Familiarity with the media/advertising industry.
Location: Portugal
Remote status: Fully remote
Salary: Yearly, EUR
Employment type: Full time