Elevus is a business group that offers innovative solutions in the Human Resources market. We have been operating since 2001, providing HR solutions both in Portugal and internationally. We are currently recruiting a Data Engineer.

These are the specific functions of the job:
- Management of cloud services for data lake construction and maintenance (e.g., Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, Amazon EMR, Amazon RDS, AWS Step Functions, and AWS Data Pipeline), with a focus on event-based, non-relational data sources integrated through Kafka connectors.
- Development and maintenance of ETL pipelines using AWS Glue, Python, PySpark, and SQL to ingest data from various sources into the data lake.
- Design and implementation of AWS Glue jobs for data transformation and processing.
- Implementation of data security measures and access controls, ensuring compliance with company and industry regulations.
- Implementation of DevOps practices, including Infrastructure as Code (IaC) and Continuous Integration/Continuous Deployment (CI/CD) for data pipelines.
- Preparation of data for predictive and prescriptive modeling.
- Management of data lake metadata and documentation to facilitate data discovery and lineage tracking.
- Proactive monitoring, performance tuning, and cost optimization of data lake solutions.
- Use of Amazon QuickSight for data visualization and reporting.
- Understanding the data platform and business requirements in order to propose innovative solutions for the client.
- Maintaining fluid relations with all stakeholders related to the activity, including partners and providers in the mobility and tech landscape.

Skills and qualifications we are looking for:
- Degree in Computer Science or equivalent (a master's degree is valued).
- In-depth knowledge of AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, Amazon EMR, Amazon RDS, AWS Step Functions, and AWS Data Pipeline for data lake construction.
- Proven track record in ETL processes, cloud-native solutions, and SQL/NoSQL databases.
- Proficiency in Python and PySpark.
- Experience with event-based, non-relational data sources and with integrating data through Kafka connectors.
- Expertise in designing and executing AWS Glue jobs.
- Proficiency with Amazon QuickSight for data visualization and analytics.
- Additional training (optional): Azure and Google Cloud, AWS Certified Data Analytics Specialty, Java, .NET.
- High-level English proficiency.
- More than five years of data engineering and management experience, including experience in the mobility services landscape and with forecast/predictive models.