We seek an experienced Data Scientist to join a cutting-edge AI team working on scalable, production-ready solutions.
Location:
Lisbon or Porto (60% remote / 40% on-site)
Languages required:
English (minimum B2)
Responsibilities:
Develop, fine-tune, and deploy GenAI models using AWS Bedrock, SageMaker, and Lambda
Work with LLMs, embeddings, transformers, and diffusion models for NLP and image generation
Optimize prompt engineering, fine-tuning, and RLHF (Reinforcement Learning from Human Feedback) techniques
Build scalable MLOps pipelines using SageMaker, ECS, and Kubernetes
Manage large-scale datasets with AWS Glue, Athena, and Redshift
Implement vector databases (Pinecone, Weaviate, FAISS, Amazon OpenSearch) for RAG (Retrieval-Augmented Generation) systems
Design and optimize ETL pipelines for AI/ML workflows
Collaborate with DevOps, Software Engineers, and product teams to deploy AI models into APIs and applications
Ensure data privacy, compliance, and model security
Monitor model performance and retraining needs using CloudWatch, MLflow, and observability tools
Requirements:
Proven background in Data Science, Machine Learning, and Generative AI
Strong skills in Python, SQL, and ML frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers
Expertise with the AWS AI/ML stack:
SageMaker, Bedrock, Lambda, Comprehend
Experience with LLMs, embeddings, transformers, and diffusion models
Familiarity with RAG, vector databases, and knowledge graphs
Experience in MLOps, Docker, Kubernetes, and CI/CD pipelines
Understanding of cloud optimization, distributed computing, and AI model scaling
Data engineering experience with Glue, Athena, or Spark
Knowledge of NLP, image generation, or multimodal AI
Interested candidates should send their CV and rate, or apply directly.