Posted 5mo ago

Machine Learning DevOps - Cloud and Compute Cluster - R&D Support

@ Pathway
Kraków, Lesser Poland Voivodeship, Poland
Remote · Full Time
Responsibilities: Optimize infrastructure, automate pipelines, manage versioning
Requirements Summary: Linux, shell, and cluster configuration; job scheduling and containerization with Slurm, Docker, and Kubernetes; CI/CD; cloud ML services (AWS, GCP, Azure); monitoring; IaC; ML pipelines; Python with ML libraries; Linux administration; eagerness to learn
Technical Tools Mentioned: Slurm, Docker, Kubernetes, GitHub Actions, Jenkins, GitLab CI, AWS, GCP, Azure, Terraform, CloudFormation, MLflow, Kubeflow, Airflow, Metaflow, Python, TensorFlow, PyTorch, Grafana, CloudWatch, Prometheus, Loki
Job Description

About Pathway

Pathway is shaking the foundations of artificial intelligence by introducing the world’s first post-transformer model that adapts and thinks just like humans. 

Pathway’s breakthrough architecture (BDH) outperforms Transformer and provides the enterprise with full visibility into how the model works. Combining the foundational model with the fastest data processing engine on the market, Pathway enables enterprises to move beyond incremental optimization and toward truly contextualized, experience-driven intelligence. The company is trusted by organizations such as NATO, La Poste, and Formula 1 racing teams.

Pathway is led by co-founder & CEO Zuzanna Stamirowska, a complexity scientist who has built a team of AI pioneers. They include CTO Jan Chorowski, who was the first person to apply Attention to speech and worked with Nobel laureate Geoff Hinton at Google Brain, and CSO Adrian Kosowski, a leading computer scientist and quantum physicist who obtained his PhD at the age of 20.

The company is backed by leading investors and advisors, including TQ Ventures and Lukasz Kaiser, co-author of the Transformer (“the T” in ChatGPT) and a key researcher behind OpenAI’s reasoning models. Pathway is headquartered in Palo Alto, California.

The opportunity

We are currently searching for a Machine Learning DevOps engineer with experience in cloud and compute cluster management, scaling infrastructures, and Linux administration.

Our development, ML training, and production environments run in the cloud across several major providers. We need support in managing and automating these processes and in scaling the infrastructure to meet growing team and production needs.

You Will

  • Optimize infrastructure for ML training and inference (e.g., GPUs, distributed compute).
  • Automate and maintain ML/LLM pipelines (data ingestion, training, validation, deployment).
  • Manage model versioning, reproducibility, and traceability.
  • Work with terabyte-scale datasets.
  • Implement ML-centric CI/CD practices.
  • Monitor model performance and data drift in production.
  • Collaborate with machine learning engineers, software engineers, and platform teams.

The role focuses on operationalizing machine learning models, ensuring scalability, reliability, and automation across the ML lifecycle.