Nebius AI Cloud managed solutions
Managed Service for Soperator
Managed Service for Soperator lets you deploy a Soperator cluster in any Nebius AI Cloud region with just a few clicks. The service takes care of the underlying infrastructure, so you can get started with Slurm and Soperator with minimal effort.

Note that GPU worker nodes in Soperator are available only if you have capacity block groups that reserve GPUs.
Pro Solution for Soperator
Pro Solution for Soperator is an expert-run solution from Nebius for customized or enterprise-scale GPU workloads. Our team of high-performance computing experts assists you with deploying a Soperator cluster and your applications on it. Depending on the scope and nature of your usage, Pro Solution for Soperator offers contracts with reserved capacity and discounted pricing. To sign up for Pro Solution for Soperator, contact sales.

Self-deployment in Nebius AI Cloud
If you prefer to deploy a Soperator cluster in Nebius AI Cloud manually, you can use the Terraform recipe from the Nebius solution library. The recipe creates a Managed Service for Kubernetes cluster with Soperator and all the additional Nebius AI Cloud resources it needs, such as networks and shared filesystems. You can adjust the settings in the recipe to fit your needs.

Self-deployment on other platforms and on-premises
You can install Soperator on any Kubernetes cluster that you have deployed on another cloud platform or on-premises. For details, see Soperator's GitHub repository.

Note that Soperator has not been tested on platforms other than Nebius AI Cloud. If you run into a problem while installing or using it, create an issue in the GitHub repository.
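The repository documents the exact installation procedure; as an illustrative sketch only, an install on an existing cluster might look like the following. The chart path, release name, and namespace below are placeholders, not verified names from the repository — check the repository for the actual commands and values:

```shell
# Hypothetical sketch of installing Soperator on an existing Kubernetes
# cluster; names and paths below are placeholders, see the repository docs.

# 1. Make sure kubectl points at the target cluster.
kubectl config current-context

# 2. Get a local checkout of the Soperator repository.
git clone https://github.com/nebius/soperator.git

# 3. Install the operator with Helm (assumed chart location and names).
helm install soperator ./soperator/helm/soperator \
  --namespace soperator-system --create-namespace
```

Once the operator is running, the Slurm cluster itself is described through the custom resources that the operator defines; the repository covers the available resource definitions and their configuration.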