Job Summary
* We are seeking an experienced Data Engineer to join our organization and support the development of large-scale data pipelines.
* Key Responsibilities:
    * Design, build, and optimize data pipelines using Databricks PySpark in production environments;
    * Work with complex datasets, ensuring performance, scalability, and reliability;
    * Collaborate with cross-functional teams to translate business requirements into technical solutions;
    * Apply best practices in data engineering, including testing, monitoring, and performance tuning;
    * Leverage cloud platforms (Azure, AWS, or GCP) to deliver scalable, high-performing solutions;
    * Ensure data quality, governance, and compliance across the entire pipeline lifecycle.
Requirements
* Strong expertise in Python and SQL;
* Proven track record of delivering Databricks PySpark pipelines into production, backed by solid hands-on Databricks experience;
* Experience with at least one major cloud provider (Azure, AWS, or GCP);
* Demonstrated ability to work with large datasets and design robust, scalable pipelines.
About the Opportunity
* Join a dynamic, collaborative team working with cutting-edge technologies;
* Opportunity to work on innovative, large-scale data engineering solutions;
* Competitive compensation package aligned with your experience and impact;
* Continuous learning and career development opportunities in Data Engineering.