Nebius AI Cloud resources that you create—virtual machines, Kubernetes clusters and so on—belong to Nebius AI Cloud services.

List of services

Compute

  • Compute – virtual machines and containers over VMs (all regions) Compute allows you to create and manage virtual machines: they work like your local machine, but in the cloud. You can connect to them and use their GPUs and other computing resources in your ML/AI workloads. Grouping VMs with GPUs into GPU clusters enables fast InfiniBand™ interconnects, and containers over VMs let you launch containerized workloads in a few clicks. Documentation
  • Managed Service for Soperator (all regions) Managed Service for Soperator provides Slurm-based clusters for distributed ML/AI training. Soperator is an open-source Kubernetes operator from Nebius that combines features from Slurm and Kubernetes. This allows you to efficiently manage both your ML workloads and the underlying infrastructure. Documentation
  • Managed Service for Kubernetes® (all regions) With Kubernetes, you can efficiently manage and scale your containerized ML/AI applications and ensure they are portable and fault-tolerant. Managed Service for Kubernetes offered by Nebius AI Cloud provides a streamlined experience for deploying and managing Kubernetes-powered clusters. Documentation

Storage

In Nebius AI Cloud, you can store data in POSIX-compliant VM volumes, Amazon S3-like buckets, registries for Docker and Helm artifacts, and PostgreSQL® databases.
  • Compute – disks and shared filesystems (all regions) Virtual machines in Compute use volumes that are also part of the Compute service. Disks are block storage volumes; each is attached to one VM at a time as a boot disk or an additional (secondary) disk. Shared filesystems are file storage volumes that can be mounted by multiple VMs. Both types function as POSIX-compliant devices for VMs. Documentation
  • Object Storage (all regions) Most ML/AI workloads involve large files, such as datasets and artifacts of trained models. To store, access and share them efficiently, you can use Object Storage, an Amazon S3-like storage service offered by Nebius AI Cloud. Documentation
  • Managed Service for PostgreSQL (all regions except eu-north2) Database systems such as PostgreSQL are an important part of MLOps: you can set up your ML tools to ingest data from databases, as well as store metadata and other artifacts in them. Managed Service for PostgreSQL in Nebius AI Cloud offers an easy way to deploy and work with fully managed PostgreSQL databases. Documentation
  • Container Registry (all regions) Container Registry lets you safely store your applications as Docker images and Helm charts, and access them easily in the workloads that you run in Nebius AI Cloud. Documentation

AI services

  • Serverless AI (all regions) Serverless AI is a service for running containerized AI workloads as interactive endpoints or non-interactive jobs. By deploying your workloads in Serverless AI, you can focus on them without worrying about the infrastructure: the service handles resource provisioning and lifecycle management, with usage-based, per-second billing. Documentation
  • Managed Service for MLflow (all regions except eu-north2 and eu-west1) MLflow is a highly available platform for managing the lifecycle of machine learning experiments. You can track experiments, version models, compare metrics and deploy customized models. Managed MLflow in Nebius AI Cloud gives you access to model artifacts, training results and tuned hyperparameters in a single interface, so you can reproduce ML experiments and deploy the best-performing models. Documentation
  • Standalone Applications (all regions) The service offers applications that deploy and manage their own infrastructure. Documentation

Observability

  • Monitoring (in preview; all regions) Monitoring collects and visualizes resource metrics, and allows you to set custom alerts that are triggered when these metrics cross the specified thresholds. Documentation
  • Logging (in preview; all regions) Logging collects, stores and provides logs for your Nebius AI Cloud services. It allows you to view all logs in one place and debug issues faster. Documentation

Network

  • Virtual Networks (all regions) Virtual Networks provides the networking infrastructure within which you create Nebius AI Cloud resources, and enables routing, segmentation and IP addressing for the resources in your tenant. Documentation

Management

  • Identity and Access Management (all regions) Identity and Access Management helps you centrally manage user and service account access to your Nebius AI Cloud resources. It ensures that only users and service accounts with specific permissions can interact with your resources. Documentation

Security

  • Audit Logs (all regions) With Audit Logs, you can see who did what and when with your resources, to ensure security, compliance and accountability. Documentation
  • MysteryBox (all regions) The service stores sensitive data, such as API keys, tokens and certificates, as encrypted secrets. This lets you avoid hardcoding sensitive data while still reusing it in scripts, pipelines and applications. Documentation
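The "no hardcoded secrets" pattern described above can be sketched as follows. This is a general illustration, not the MysteryBox API: it assumes your deployment pipeline has already fetched the secret from MysteryBox and injected it into the workload's environment, and the variable name `MY_APP_API_TOKEN` is hypothetical.

```python
import os

# Stand-in for the injection step: in a real deployment, the pipeline fetches
# the secret from the secrets store (e.g. MysteryBox) and sets the variable,
# so the value never appears in source code or version control.
os.environ["MY_APP_API_TOKEN"] = "injected-by-pipeline"

def get_api_token() -> str:
    """Read the API token from the environment instead of hardcoding it."""
    token = os.environ.get("MY_APP_API_TOKEN")
    if not token:
        raise RuntimeError(
            "MY_APP_API_TOKEN is not set; inject it from your secrets store"
        )
    return token

print(get_api_token())
```

Because the script only reads an environment variable, the same code runs unchanged across environments; only the injected secret differs.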

Service and application stages

Each Nebius AI Cloud service or application is either in preview or generally available.

Preview

Services and applications in preview are intended for testing and feedback. They come without guaranteed service levels, and you may encounter issues and bugs when using them, so using them for production purposes is not recommended. In some cases, access to a service or an application in preview might be limited. A service or an application in preview can differ from its final, generally available version, and is not guaranteed to become generally available at all.

Nebius AI Cloud provides services and applications in preview free of charge. Most of them become paid after they become generally available, but you will never be retroactively charged for using a service or an application while it was free of charge in preview. Parts of services and applications, such as their individual features, can also be in preview.

General availability

Generally available services and applications are ready to use in production environments. They are fully supported by Nebius AI Cloud and usually covered by service level agreements (SLAs). Usually, you are charged for using generally available services and applications.

InfiniBand and InfiniBand Trade Association are registered trademarks of the InfiniBand Trade Association. Postgres, PostgreSQL and the Slonik Logo are trademarks or registered trademarks of the PostgreSQL Community Association of Canada, and used with their permission.