We are looking for a Data Engineer for a challenging project.
What you'll do:
* Act as the team's go-to specialist for Azure Data Factory (ADF), Databricks, Python, PySpark, and Spark SQL;
* Lead the day-to-day efforts to gather and ingest raw data into centralized enterprise data platforms;
* Collaborate with data and engineering teams to define robust data pipelines, standards, and flow documentation;
* Oversee end-to-end data ingestion and transformation processes across diverse data environments, including data lakes and warehouses;
* Continuously improve and fine-tune ingestion and integration workflows to maximize performance and reliability.
What you will need to bring:
* Bachelor's degree in Computer Science, Engineering, or a closely related field - or equivalent hands-on industry experience;
* Proven experience designing and deploying robust, production-level data pipelines using Azure Data Factory (ADF);
* Strong background working on enterprise-grade data lake or cloud data platform implementations, preferably in complex, multi-source environments;
* Advanced proficiency in SQL and one or more programming languages such as Python or R; working knowledge of T-SQL and data-centric scripting;
* Skilled in building event-driven data workflows utilizing Azure Functions, with the ability to connect and manage varied data sources;
* Solid experience delivering and maintaining both batch-processing and API-driven data pipelines across heterogeneous systems.
What Syone can offer you:
* Integration into an organization with strong, sustained growth and involvement in pioneering projects built on innovative technological solutions;
* Strong IT training plans;
* Professional growth through involvement in ambitious technological projects, both national and international.