About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise as well as personal needs. Our offerings include Le Chat, La Plateforme, Mistral Code and Mistral Compute, a suite that brings frontier intelligence to end-users.
We are a dynamic, collaborative team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed across France, the USA, the UK, Germany and Singapore. We are creative, low-ego and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture at https://mistral.ai/careers.
Role Summary
This role focuses on building and operating the next generation of data infrastructure at Mistral AI. You will be a core contributor to our evolution, helping us design and scale massive compute fleets and storage systems built for high performance.
You will help us move toward a future of decoupled control and data planes, scaling big data compute and storage platforms while ensuring secure and governed data access for MLOps and research. You will take full lifecycle ownership: from architecting the migration away from legacy orchestrators to implementing production-grade pipelines and participating in on-call rotations for critical training jobs.
What will you do
• Build & Scale: Design, build, and operate massive distributed compute and storage systems.
• Global Orchestration: Architect and maintain multi-cluster orchestration layers to optimize workload placement across diverse hardware and regions.
• Design Future-Proof Storage: Architect our transition to modern storage formats to handle fine-tuning datasets at a scale that anticipates exabyte growth.
• Platform Engineering: Contribute to the development of our internal training platform, ensuring seamless model training and fine-tuning capabilities across Kubernetes- and SLURM-based environments.
• Metadata & Lineage: Implement and manage systems to provide clear visibility and lineage as our data and model pipelines grow in complexity.
• Operational Excellence: Manage cloud-native deployments with modern workflows, ensuring our data platform can scale by orders of magnitude.
About you
• Have 4+ years of experience in Data Infrastructure, MLOps, or Infrastructure Engineering.
• Have experience or a strong interest in supporting foundational compute and storage platforms.
• Are proficient in Python and enjoy solving the "brittle data lake" problem with modern, columnar storage standards.
• Are well-versed in Kubernetes-native tooling and excited to debug large-scale distributed systems across multi-cluster environments.
• Take pride in building and operating scalable, reliable, and secure systems from the ground up.
• Are comfortable with ambiguity and the challenges of building high-scale infrastructure in a rapid-growth AI environment.
About the Research Engineering team
The team spans Platform (shared infra & clean code) and Embedded (inside research squads). Engineers can move along the research↔production spectrum as needs or interests evolve.
As a Research Engineer – ML track, you’ll build and optimize the large-scale learning systems that power our open-weight models. Working hand-in-hand with Research Scientists, you’ll join either:
- Platform RE Team: Enhance the shared training framework, data pipelines and cluster tooling used by every team; or
- Embedded RE Team: Sit inside a research squad (Alignment, Pre-training, Multimodal, …) and turn fresh ideas into repeatable, scalable code.
What will you do
• Accelerate researchers by taking on the heavy parts of large-scale ML pipelines and building robust tools.
• Bridge cutting-edge research and production: integrate checkpoints, streamline evaluation, and expose APIs.
• Conduct experiments on the latest deep-learning techniques (sparsified 70B+ runs, distributed training on thousands of GPUs).
• Design, implement and benchmark ML algorithms; write clear, efficient code in Python.
• Deliver prototypes that become production-grade components for Le Chat and our enterprise API.
About you
• Master’s or PhD in Computer Science (or equivalent proven track record).
• 4+ years working on large-scale ML codebases.
• Hands-on with PyTorch, JAX or TensorFlow; comfortable with distributed training (DeepSpeed / FSDP / SLURM / K8s).
• Experience in deep learning, NLP or LLMs; bonus for CUDA or data-pipeline chops.
• Strong software-design instincts: testing, code review, CI/CD.
• Self-starter, low-ego, collaborative.
What we offer
By applying, you agree to our Applicant Privacy Policy.