After you evict the workload from the existing node group, Kubernetes moves it automatically. Before moving your workload, you need to:
  1. Install jq if you don’t have it on your system.
  2. Get your cluster ID, which is returned in the .metadata.id field of the cluster resource.
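If you already have the cluster resource as JSON, the ID can be extracted and exported with jq. The sketch below uses a hypothetical cluster object; in practice the JSON comes from the Nebius CLI:

```shell
# Hypothetical cluster resource JSON; real output comes from the Nebius CLI
# (a cluster "get" call with JSON output).
cluster_json='{"metadata":{"id":"mk8scluster-example","name":"my-cluster"}}'

# Extract the .metadata.id field and export it for the commands below.
export NB_K8S_CLUSTER_ID=$(echo "$cluster_json" | jq -r '.metadata.id')
echo "$NB_K8S_CLUSTER_ID"   # prints mk8scluster-example
```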
The CLI commands in this guide assume that the cluster ID is saved to an environment variable NB_K8S_CLUSTER_ID. To move your workload:
  1. Create a node group.
    For example, the following command creates a group of two nodes, each with one NVIDIA H100 GPU, and all drivers and components required for the GPU:
    nebius mk8s node-group create \
      --parent-id $NB_K8S_CLUSTER_ID \
      --name mk8s-node-group-test \
      --fixed-node-count 2 \
      --template-resources-platform gpu-h100-sxm \
      --template-resources-preset 1gpu-16vcpu-200gb \
      --template-gpu-settings-drivers-preset cuda12
    
  2. Export the ID of the node group from which you want to move the workload to an environment variable. For example, you can look it up by the node group’s name (if you have set and know it) and the parent cluster’s ID:
    export NB_K8S_NODE_GROUP_ID=$(nebius mk8s node-group get-by-name \
      --parent-id $NB_K8S_CLUSTER_ID \
      --name node-group-name \
      --format jsonpath='{.metadata.id}')
    
  3. Get the list of node names in the node group from which you want to move the workload:
    export OLD_NODES=$(kubectl get nodes -o json \
      | jq -r --arg ng "$NB_K8S_NODE_GROUP_ID" \
        '.items[].metadata
          | select(.annotations."cluster.x-k8s.io/owner-name" == $ng)
          | .name')
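You can sanity-check the owner-name filter locally against sample output before running it on the cluster. The node names and group IDs below are made up for the example:

```shell
# Illustrative sample of `kubectl get nodes -o json` output: two nodes
# owned by different node groups.
nodes_json='{"items":[
  {"metadata":{"name":"node-old-1","annotations":{"cluster.x-k8s.io/owner-name":"ng-old"}}},
  {"metadata":{"name":"node-new-1","annotations":{"cluster.x-k8s.io/owner-name":"ng-new"}}}
]}'

# Keep only the nodes whose owner-name annotation matches the old group;
# -r prints raw (unquoted) names so shell loops can iterate over them.
NB_K8S_NODE_GROUP_ID=ng-old
echo "$nodes_json" | jq -r --arg ng "$NB_K8S_NODE_GROUP_ID" \
  '.items[].metadata
   | select(.annotations."cluster.x-k8s.io/owner-name" == $ng)
   | .name'   # prints node-old-1
```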
    
  4. Cordon off the old nodes so that no new pods are scheduled on them:
    for node in $OLD_NODES; do
      kubectl cordon $node;
    done
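Because $OLD_NODES is expanded unquoted, the shell splits it on whitespace, so the loop runs once per node name. The pattern can be sketched without a cluster by stubbing out kubectl (the stub is purely illustrative; the real command acts on the cluster):

```shell
# Stand-in list of node names, one per line, as produced by the jq command above.
OLD_NODES="node-old-1
node-old-2"

# Hypothetical stub so the loop can run locally; real kubectl cordons the node.
kubectl() { echo "$1: $2"; }

for node in $OLD_NODES; do
  kubectl cordon $node;
done
# prints:
# cordon: node-old-1
# cordon: node-old-2
```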
    
  5. Drain the old nodes so that the existing pods can be evicted from them:
    for node in $OLD_NODES; do
      kubectl drain --force --ignore-daemonsets --delete-emptydir-data $node;
    done
    
    Kubernetes will automatically move evicted pods to suitable nodes.
  6. Delete the old node group:
    nebius mk8s node-group delete --id $NB_K8S_NODE_GROUP_ID