Nebius AI Cloud supports a wide range of workloads and software stacks, making it easy to run your machine learning jobs, endpoints and applications with familiar tools and frameworks.

Runtimes

Nebius AI Cloud compute

You can launch and manage your workloads on compute resources that Nebius AI Cloud offers:
  • Compute virtual machines (VMs): cloud-hosted VMs with NVIDIA GPU support for ML/AI workloads. Web console | Documentation
  • Compute containers over VMs: containerized applications with fast, flexible setup, deployed directly on VMs. Web console | Documentation
  • Managed Service for Kubernetes® clusters: fully managed Kubernetes clusters for scalable, containerized workloads. Web console | Documentation
  • Managed Service for Soperator clusters: Slurm-based clusters for distributed ML/AI training, also available as an open-source Kubernetes operator. Web console | Documentation
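
For the managed Kubernetes option, GPU workloads are requested through the standard NVIDIA device-plugin resource. The sketch below is a minimal, hypothetical Pod manifest: the pod name, container image and GPU count are placeholders, not Nebius-specific values.

```yaml
# Minimal sketch of a GPU pod for a managed Kubernetes cluster.
# Image name and GPU count are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.01-py3   # example CUDA-enabled image
      command: ["python", "train.py"]
      resources:
        limits:
          nvidia.com/gpu: 1   # standard NVIDIA device-plugin resource
```

Apply it with `kubectl apply -f gpu-job.yaml` against the cluster's kubeconfig.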

Third-party orchestrators

Nebius AI Cloud integrates with leading open-source orchestrators that help you manage and automate workloads:
  • dstack: a flexible workload manager for ML and data science. Tutorial
  • Metaflow (Outerbounds): production-grade workflow management for ML pipelines. Blog post
  • SkyPilot: a unified framework for running AI workloads across clouds, including Nebius AI Cloud. Tutorial
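
As an illustration of the orchestrator workflow, here is a minimal SkyPilot-style task definition. The accelerator type, node count and the `train.py` script are assumptions for the sketch; see the SkyPilot tutorial for Nebius-specific setup.

```yaml
# Hypothetical SkyPilot task file (train.yaml); script and
# accelerator choice are placeholders.
resources:
  accelerators: H100:8   # request 8 GPUs per node

num_nodes: 1

setup: |
  pip install -r requirements.txt

run: |
  python train.py
```

A task like this is typically launched with `sky launch -c my-cluster train.yaml`, where `my-cluster` is a name of your choosing.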

What you can run in Nebius AI Cloud

If your software runs on Ubuntu 20.04 or later and supports containers, VMs or Kubernetes clusters, you can run it on Nebius AI Cloud runtimes. This includes, but is not limited to, the tools, libraries and platforms listed below.
Where possible, use containerized deployment for better portability, scalability and manageability.
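
To make that advice concrete, a containerized deployment usually starts from a Dockerfile like the sketch below. The file names and the `serve.py` entrypoint are hypothetical; adapt them to your application.

```dockerfile
# Hypothetical Dockerfile for a Python ML service; adjust the base
# image (e.g. a CUDA image for GPU workloads) and entrypoint as needed.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "serve.py"]
```

The resulting image can then be run on any of the runtimes above: compute containers over VMs, or a managed Kubernetes or Soperator cluster.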

ML libraries and frameworks

Distributed training frameworks

Inference runtimes

Ecosystem AI platforms