In addition to the login and worker nodes that you create in your Managed Service for Soperator clusters, every cluster contains service nodes that support the cluster’s operation. Service nodes are billed and count towards your vCPU quotas. You cannot configure service nodes.

Service node types

Managed Soperator service nodes vary by function:
  • Controller nodes orchestrate Slurm activities, such as queuing jobs, monitoring node states and allocating resources.
  • Accounting nodes, also known as database daemon nodes or DBD nodes, collect accounting information for jobs and job steps that you run in the cluster.
  • Soperator system nodes host Soperator tools that manage Nebius AI Cloud resources, certificates and telemetry.
For more details about controller and accounting nodes, see the Slurm documentation.

Service nodes in clusters

Each Managed Soperator cluster contains the following service nodes:
Type | Number of nodes | Compute per node | Storage per node
Controller nodes | 2 | Non-GPU AMD EPYC Genoa, 8vcpu-32gb | Network SSD disk, 512 GiB
Accounting nodes | 1 | Non-GPU AMD EPYC Genoa, 8vcpu-32gb | Network SSD disk, 256 GiB; Network SSD IO M3 disk, 1024 GiB
Soperator system nodes | 3–5, autoscaled depending on load | Non-GPU AMD EPYC Genoa, 8vcpu-32gb | Network SSD disk, 512 GiB
Therefore, for billing and quota purposes, at least the following compute and storage resources are added on top of the login and worker nodes of a cluster (the minimum assumes three Soperator system nodes):
  • Non-GPU AMD EPYC Genoa: 48 vCPUs, 192 GiB RAM
  • Network SSD disks: 2816 GiB
  • Network SSD IO M3 disk: 1024 GiB
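The minimum totals above follow directly from the table. A quick sketch to verify the arithmetic, using the node counts and per-node sizes as listed (with the minimum of three autoscaled Soperator system nodes):

```python
# Per-node resources taken from the table above.
# The autoscaled Soperator system nodes are counted at their minimum of 3.
service_nodes = {
    "controller": {"count": 2, "vcpus": 8, "ram_gib": 32, "ssd_gib": 512, "ssd_io_m3_gib": 0},
    "accounting": {"count": 1, "vcpus": 8, "ram_gib": 32, "ssd_gib": 256, "ssd_io_m3_gib": 1024},
    "system":     {"count": 3, "vcpus": 8, "ram_gib": 32, "ssd_gib": 512, "ssd_io_m3_gib": 0},
}

def total(resource: str) -> int:
    """Sum a resource across all service nodes, weighted by node count."""
    return sum(n["count"] * n[resource] for n in service_nodes.values())

print(total("vcpus"))          # 48 vCPUs
print(total("ram_gib"))        # 192 GiB RAM
print(total("ssd_gib"))        # 2816 GiB of network SSD disks
print(total("ssd_io_m3_gib"))  # 1024 GiB of network SSD IO M3 disks
```

If the system node pool scales up to its maximum of 5 nodes, the service-node footprint grows accordingly (by 16 vCPUs, 64 GiB RAM and 1024 GiB of network SSD storage).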