We're looking for a Senior Data Engineer to join our team and support one of our clients – a leading company with a strong focus on Databricks.
What You'll Do:
* Design, build, and optimize data pipelines using Databricks PySpark in production environments;
* Work with large-scale and complex datasets, ensuring performance, scalability, and reliability;
* Collaborate with cross-functional teams to translate business requirements into technical solutions;
* Apply best practices in data engineering, including testing, monitoring, and performance tuning;
* Leverage cloud platforms (Azure, AWS, or GCP) to deliver scalable, high-performing solutions;
* Ensure data quality, governance, and compliance across the entire pipeline lifecycle.
What You Need to Succeed:
* Strong expertise in Python and SQL;
* Proven, hands-on track record of delivering Databricks PySpark pipelines into production;
* Experience with at least one major cloud provider (Azure, AWS, or GCP);
* Demonstrated ability to work with large datasets and design robust, scalable pipelines.
What We Offer:
* Integration into a solid, international project with a Databricks-driven company;
* Opportunity to work on innovative, large-scale data engineering solutions;
* A dynamic and collaborative environment with cutting-edge technologies;
* Competitive compensation package, aligned with your experience and impact;
* Continuous learning and career development opportunities in Data Engineering.