Project Description:
The primary goal of the project is the modernization, maintenance, and development of an eCommerce platform for a large US-based retail company serving millions of omnichannel customers each week.
Solutions are delivered by several Product Teams focused on different domains: Customer, Loyalty, Search and Browse, Data Integration, and Cart.
Current overriding priorities are onboarding new brands, re-architecture, database migrations, and migrating microservices to a unified cloud-native solution without any disruption to the business.
Responsibilities:
We are looking for a Data Engineer who will be responsible for designing a solution for a large retail company. The main focus is supporting the processing of big data volumes and integrating the solution into the current architecture.
Mandatory Skills Description:
• 8+ years of overall experience.
• Strong, recent hands-on expertise with Azure Data Factory and Synapse is a must (3+ years).
• Strong expertise in designing and implementing data models, including conceptual, logical, and physical data models, to support efficient data storage and retrieval.
• Hands-on experience with Power BI, including data modeling, report and dashboard development, and building interactive, business-ready visualizations based on enterprise data sources.
• Strong knowledge of Microsoft Azure, including Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, and PySpark, for building scalable and reliable data solutions.
• Extensive experience building robust and scalable ETL/ELT pipelines to extract, transform, and load data from various sources into data lakes or data warehouses.
• Ability to integrate data from disparate sources, including databases, APIs, and external data providers, using appropriate techniques such as API integration or message queuing.
• Proficiency in designing and implementing data warehousing solutions (dimensional modeling, star schemas, Data Mesh, Data/Delta Lakehouse, Data Vault).
• Proficiency in SQL for complex queries, data transformations, and performance tuning on cloud-based data stores.
• Experience integrating metadata and governance processes into cloud-based data platforms.
• Certification in Azure, Databricks, or other relevant technologies is an added advantage.
• Experience with cloud-based analytical databases.
• Experience with Azure MI, Azure Database for PostgreSQL, Azure Cosmos DB, Azure Analysis Services, and Informix.
• Experience with Python and Python-based ETL tools.
• Experience with shell scripting (Bash, Unix, or Windows shell) is preferable.
Nice-to-Have Skills Description:
• Experience with Elasticsearch.
• Familiarity with containerization and orchestration technologies (Docker, Kubernetes).
• Troubleshooting and Performance Tuning: Ability to identify and resolve performance bottlenecks in data processing workflows and optimize data pipelines for efficient data ingestion and analysis.
• Collaboration and Communication: Strong interpersonal skills to collaborate effectively with stakeholders, data engineers, data scientists, and other cross-functional teams.
Languages: English: B2 Upper Intermediate
Working hours: ***** CET. Occasionally, hours may be extended to 20:00 CET due to client meetings.
We offer numerous benefits, such as:
• Flexible work schedule
• Great company culture and friendly environment
• Work within a fast-moving, exciting, and challenging environment
• Talent development ecosystem
• Luxoft Training Center services with ad-hoc leadership and technical programs
• Knowledge sharing in professional communities
• Meetings for knowledge sharing, celebrations, and brainstorming: your ideas count!
• Regular team-building activities
• A variety of discounts for our employees