About Capgemini
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible.
Your Role
We are looking for a Generative AI expert to join us. As a key member of our engineering team, you will be responsible for:
* Developing, fine-tuning, and deploying Generative AI models using AWS services such as Bedrock, SageMaker, and Lambda.
* Working with Large Language Models (LLMs), embeddings, transformers, and diffusion models for applications in NLP, image generation, and AI automation.
* Applying and optimizing prompt engineering, fine-tuning, and Reinforcement Learning from Human Feedback (RLHF) techniques.
* Building scalable MLOps pipelines for training and deploying GenAI models using SageMaker, ECS, and Kubernetes.
* Processing and managing large-scale datasets for AI training using AWS Glue, Athena, and Redshift.
* Implementing vector databases (Pinecone, Weaviate, FAISS, Amazon OpenSearch) for efficient retrieval-augmented generation (RAG) applications (see the retrieval sketch after this list).
* Designing and optimizing ETL pipelines for AI/ML data workflows.
* Collaborating with software engineers, DevOps, and product teams to integrate AI models into applications and APIs.
* Ensuring security, compliance, and data privacy in AI/ML workflows.
* Monitoring AI model performance and retraining needs using Amazon CloudWatch, MLflow, and other observability tools.
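To give a concrete sense of the retrieval-augmented generation work described above, here is a minimal Python sketch that embeds a toy corpus with a Bedrock embedding model, indexes it with FAISS, and passes the retrieved context to a Bedrock chat model via boto3. The model IDs, the region, and the corpus are illustrative assumptions, not project specifics.

```python
# Minimal RAG-style sketch: embed documents, retrieve the nearest ones with FAISS,
# and pass them as context to a Bedrock chat model. Model IDs, region, and the toy
# corpus below are illustrative placeholders.
import json

import boto3
import faiss
import numpy as np

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

EMBED_MODEL_ID = "amazon.titan-embed-text-v2:0"            # illustrative embedding model
CHAT_MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"   # illustrative chat model


def embed(text: str) -> np.ndarray:
    """Call a Bedrock embedding model and return the vector as float32."""
    response = bedrock.invoke_model(
        modelId=EMBED_MODEL_ID,
        body=json.dumps({"inputText": text}),
    )
    payload = json.loads(response["body"].read())
    return np.array(payload["embedding"], dtype="float32")


# Build an in-memory FAISS index over a toy corpus.
corpus = [
    "Our refund policy allows returns within 30 days.",
    "Support is available 24/7 via chat and email.",
    "Premium plans include priority onboarding.",
]
vectors = np.stack([embed(doc) for doc in corpus])
index = faiss.IndexFlatL2(vectors.shape[1])
index.add(vectors)

# Retrieve the two most relevant documents for a user question.
question = "How long do customers have to return a product?"
_, ids = index.search(embed(question).reshape(1, -1), 2)
context = "\n".join(corpus[i] for i in ids[0])

# Ask the chat model to answer using only the retrieved context.
reply = bedrock.converse(
    modelId=CHAT_MODEL_ID,
    messages=[{
        "role": "user",
        "content": [{"text": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
    }],
)
print(reply["output"]["message"]["content"][0]["text"])
```

In practice the in-memory FAISS index would typically be replaced by a managed vector store such as Pinecone or Amazon OpenSearch, with the same embed, retrieve, and generate flow.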
Your Profile
To succeed in this role, you will need:
* A strong background in Data Science, Machine Learning, and Generative AI.
* Proficiency in Python, SQL, and ML frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers (a minimal sketch follows this list).
* Experience with AWS AI/ML services such as SageMaker, Bedrock, Lambda, and Comprehend.
* Hands-on experience with LLMs, embeddings, transformers, and diffusion models.
* Familiarity with Retrieval-Augmented Generation (RAG), vector databases, and knowledge graphs.
* Experience in MLOps, containerization (Docker, Kubernetes, ECS), and CI/CD for ML pipelines.
* A solid understanding of cloud optimization, distributed computing, and model scaling.
* Strong data engineering skills for processing large datasets with AWS Glue, Athena, or Spark.
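As a reference point for the Python, PyTorch, and Hugging Face Transformers proficiency listed above, loading a pretrained causal LLM and generating text might look roughly like the sketch below. The checkpoint name and generation parameters are placeholders chosen so the snippet runs on a CPU.

```python
# Minimal sketch of loading a pretrained causal LLM with Hugging Face Transformers
# and generating text with PyTorch. The checkpoint is a small placeholder; any
# causal-LM checkpoint follows the same pattern.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "distilgpt2"  # placeholder checkpoint, small enough to run on CPU

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Generative AI on AWS lets teams"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampled decoding; the generation parameters are illustrative defaults.
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,
        temperature=0.8,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```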
Nice to Have
We are also interested in candidates who bring:
* Experience fine-tuning open-source models (LLaMA, Falcon, Mistral, Stable Diffusion); see the sketch after this list.
* AWS certifications such as AWS Certified Machine Learning – Specialty.
* Exposure to real-time AI applications, chatbot development, or autonomous agents.
* Knowledge of ethical AI, bias mitigation, and AI safety best practices.
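For the open-source fine-tuning experience mentioned above, a parameter-efficient setup with LoRA via the PEFT library could look roughly like this sketch. The base checkpoint, target modules, and hyperparameters are illustrative assumptions, and the wrapped model would then be handed to a standard training loop or transformers.Trainer.

```python
# Minimal sketch of configuring LoRA for parameter-efficient fine-tuning with PEFT.
# The base checkpoint, target modules, and hyperparameters are illustrative, not a
# prescribed recipe.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("distilgpt2")  # placeholder checkpoint

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection layers in GPT-2-style models
    task_type="CAUSAL_LM",
)

# Wrap the frozen base model so only the small LoRA adapters are trainable.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```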
About Working Here
You'll love working here because:
* Our team environment is multicultural and inclusive.
* We promote work-life balance and offer a supportive atmosphere.
* You'll have the opportunity to engage in exciting national and international projects.
* Your career growth is central to our mission.
* We offer training and certification programs.
* We provide health and life insurance.
* We run a referral program with bonuses for talent recommendations.
* We have great office locations.