About Us
We are a specialized software development company dedicated to shaping the future and creating value through innovative IT solutions.
We give clients all the expertise they need in one place. Inspired by the principles of quantum physics, we go beyond traditional boundaries to provide customized solutions that redefine the IT landscape and drive shared success.
If you thrive in modern data environments and enjoy working with large-scale data pipelines, we want to hear from you.
Key Responsibilities:
* Engage in the modernization of ETL libraries using Python and Spark;
* Work with Delta Tables in Databricks, optimizing data pipelines for scalability and performance;
* Design and implement new data ingestion and curation processes for downstream analytics;
* Own and evolve the Data Quality framework using Great Expectations with Spark;
* Collaborate with data engineers, analysts, and platform teams to deliver high-quality, reliable datasets;
* Bring a software engineering mindset to everything you build — testable, versioned, and production-ready.
What We're Looking For:
* 5+ years of experience in Data Engineering or Big Data environments;
* Deep expertise in Spark and PySpark (distributed processing, performance tuning, UDFs, etc.);
* Strong proficiency in Python and SQL;
* Hands-on experience with Databricks and Delta Lake;
* Experience with Great Expectations or similar data validation frameworks;
* Strong communication skills and the ability to work autonomously in cross-functional teams.
Nice to Have:
* Experience with cloud platforms (Azure, AWS, or GCP);
* Familiarity with CI/CD, containerization (Docker), and orchestration tools (Airflow, dbt, etc.);
* Previous experience mentoring junior engineers or leading technical initiatives.
This is a remote position, based in Portugal.