Retrieving the Data Source
This data source can be retrieved by ID or by name.
Retrieve by ID
To retrieve by ID, fill in only the `id` field:
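A minimal sketch of an ID-based lookup. The data source type name and the ID value below are placeholders, not taken from this page; substitute the actual type name from the provider:

```terraform
# Hypothetical data source type name and ID — replace with real values.
data "nebius_mk8s_v1_node_group" "by_id" {
  # Look up the node group by its unique resource ID only.
  id = "nodegroup-example-id"
}
```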
Retrieve by Name
To retrieve by name, fill in only the `name` and `parent_id` fields:
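A minimal sketch of a name-based lookup. As above, the data source type name and the values are placeholders:

```terraform
# Hypothetical data source type name and values — replace with real values.
data "nebius_mk8s_v1_node_group" "by_name" {
  # Look up the node group by its human-readable name within the parent Cluster.
  name      = "example-node-group"
  parent_id = "cluster-example-id"
}
```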
Schema
Optional
- `id` (String) Identifier for the resource, unique for its resource type.
- `name` (String) Human readable name for the resource.
- `parent_id` (String) Identifier of the parent resource to which the resource belongs.
Read-Only
- `auto_repair` (Attributes) Parameters for node auto repair. (see below for nested schema)
- `autoscaling` (Attributes) Enables the Kubernetes Cluster Autoscaler for this NodeGroup and defines autoscaling parameters. Cannot be set alongside `fixed_node_count`. (see below for nested schema)
- `created_at` (String) Timestamp indicating when the resource was created. A string representing a timestamp in ISO 8601 format: `YYYY-MM-DDTHH:MM:SSZ` or `YYYY-MM-DDTHH:MM:SS.SSS±HH:MM`
- `fixed_node_count` (Number) Number of nodes in the group. Can be changed manually at any time. Cannot be set alongside `autoscaling`.
- `labels` (Map of String) Labels associated with the resource.
- `metadata` (Attributes) Common resource metadata. The `parent_id` is the ID of a Cluster. (see below for nested schema)
- `resource_version` (Number) Version of the resource for safe concurrent modifications and consistent reads. Positive; monotonically increases on each resource spec change (but not on each change of the resource's container(s) or status). The service allows a zero or current value.
- `status` (Attributes) (see below for nested schema)
- `strategy` (Attributes) Defines the deployment strategy: roll-out, or node re-creation during a configuration change. Lets you choose a trade-off between roll-out speed, extra resource consumption, and workload disruption. (see below for nested schema)
- `template` (Attributes) Parameters for the Kubernetes Node object and the Nebius Compute Instance. Unless stated otherwise next to a NodeTemplate field, updating it will cause a NodeGroup roll-out according to the NodeGroupDeploymentStrategy. (see below for nested schema)
- `updated_at` (String) Timestamp indicating when the resource was last updated. A string representing a timestamp in ISO 8601 format: `YYYY-MM-DDTHH:MM:SSZ` or `YYYY-MM-DDTHH:MM:SS.SSS±HH:MM`
- `version` (String) Desired Kubernetes version of the cluster. For now, the only acceptable format is `<major>.<minor>`, such as "1.31". An option for patch version updates will be added later. By default, the cluster control plane's `<major>.<minor>` version is used.
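Once retrieved, the read-only attributes above can be referenced elsewhere in the configuration. A sketch, assuming a hypothetical data source type name:

```terraform
# Hypothetical data source type name and ID — replace with real values.
data "nebius_mk8s_v1_node_group" "example" {
  id = "nodegroup-example-id"
}

output "node_group_state" {
  # One of STATE_UNSPECIFIED, PROVISIONING, RUNNING, DELETING.
  value = data.nebius_mk8s_v1_node_group.example.status.state
}

output "target_node_count" {
  # Desired total number of nodes in the group.
  value = data.nebius_mk8s_v1_node_group.example.status.target_node_count
}
```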
Nested Schema for auto_repair
Read-Only:
- `conditions` (Attributes List) Conditions that determine whether a node should be auto repaired. (see below for nested schema)
Nested Schema for auto_repair.conditions
Read-Only:
- `disabled` (Boolean) When true, disables the default auto-repair condition rules. Cannot be set alongside `timeout`.
- `status` (String) Node condition status. Possible values: `CONDITION_STATUS_UNSPECIFIED`, `TRUE`, `FALSE`, `UNKNOWN`
- `timeout` (String) The duration after which the node is automatically repaired if the condition remains in the specified status. Duration as a string: a possibly signed sequence of decimal numbers, each with an optional fraction and a unit suffix, such as `300ms`, `-1.5h` or `2h45m`. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`, `d`. Cannot be set alongside `disabled`.
- `type` (String) Node condition type.
Nested Schema for autoscaling
Read-Only:
- `max_node_count` (Number)
- `min_node_count` (Number)
Nested Schema for metadata
Nested Schema for status
Read-Only:
- `events` (Attributes List) A resource event that has occurred (more or less in the same way) multiple times across a service-defined aggregation interval. (see below for nested schema)
- `node_count` (Number) Total number of nodes that are currently in the node group. Both ready and not-ready nodes are counted.
- `outdated_node_count` (Number) Total number of nodes that have an outdated node configuration. These nodes will be replaced by new nodes with an up-to-date configuration.
- `ready_node_count` (Number) Total number of nodes that successfully joined the cluster and are ready to serve workloads. Both outdated and up-to-date nodes are counted.
- `reconciling` (Boolean) Shows whether changes are in flight.
- `state` (String) Possible values: `STATE_UNSPECIFIED`, `PROVISIONING`, `RUNNING`, `DELETING`
- `target_node_count` (Number) Desired total number of nodes that should be in the node group. It is either `NodeGroupSpec.fixed_node_count` or an arbitrary number between `NodeGroupAutoscalingSpec.min_node_count` and `NodeGroupAutoscalingSpec.max_node_count` decided by the autoscaler.
- `version` (String) Actual version of the NodeGroup. Has the format `<major>.<minor>.<patch>-nebius-node.<infra_version>`, such as "1.30.0-nebius-node.10", where `<major>.<minor>.<patch>` is the Kubernetes version and `<infra_version>` is the version of the node infrastructure and configuration, whose updates may include bug fixes, security updates, and new features depending on the worker node configuration.
Nested Schema for status.events
Read-Only:
- `first_occurred_at` (String) Time of the first occurrence of a recurrent event. A string representing a timestamp in ISO 8601 format: `YYYY-MM-DDTHH:MM:SSZ` or `YYYY-MM-DDTHH:MM:SS.SSS±HH:MM`
- `last_occurrence` (Attributes) Last occurrence of a recurrent event. Represents an API resource-related event which is potentially important to the end user. What exactly constitutes an event to be reported is service-dependent. (see below for nested schema)
- `occurrence_count` (Number) The number of times this event has occurred between `first_occurred_at` and `last_occurrence.occurred_at`. Must be > 0.
Nested Schema for status.events.last_occurrence
Read-Only:
- `code` (String) Event code (unique within the API service), in UpperCamelCase, e.g. `"DiskAttached"`
- `level` (String) Severity level for the event. Possible values:
  - `UNSPECIFIED`: Unspecified event severity level
  - `DEBUG`: A debug event providing detailed insight. Such events are used to debug problems with specific resources and processes
  - `INFO`: A normal event or state change. Informs what is happening with the API resource. Does not require user attention or interaction
  - `WARN`: Warning event. Indicates a potential or minor problem with the API resource and/or the corresponding processes. Needs user attention, but requires no immediate action (yet)
  - `ERROR`: Error event. Indicates a serious problem with the API resource and/or the corresponding processes. Requires immediate user action
- `message` (String) A human-readable message describing what has happened (and suggested actions for the user, if this is a `WARN` or `ERROR` level event)
- `occurred_at` (String) Time at which the event occurred. A string representing a timestamp in ISO 8601 format: `YYYY-MM-DDTHH:MM:SSZ` or `YYYY-MM-DDTHH:MM:SS.SSS±HH:MM`
Nested Schema for strategy
Read-Only:
- `drain_timeout` (String) Maximum amount of time that the service will spend attempting to gracefully drain a node (evicting its pods) before falling back to pod deletion. By default, a node can be drained for an unlimited time. An important consequence is that if a PodDisruptionBudget does not allow a pod to be evicted, a NodeGroup update with node re-creation will hang on that pod's eviction. Note that this differs from `kubectl drain --timeout`. Duration as a string: a possibly signed sequence of decimal numbers, each with an optional fraction and a unit suffix, such as `300ms`, `-1.5h` or `2h45m`. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`, `d`.
- `max_surge` (Attributes) The maximum number of additional nodes that can be provisioned above the desired number of nodes during the update process. This value can be specified either as an absolute number (for example, 3) or as a percentage of the desired number of nodes (for example, 5%). When specified as a percentage, the actual number is calculated by rounding up to the nearest whole number. This value cannot be 0 if `max_unavailable` is also set to 0. Defaults to 1. Example: if set to 25%, the node group can scale up by an additional 25% during the update, allowing new nodes to be added before old nodes are removed, which helps minimize workload disruption. NOTE: it is the user's responsibility to ensure that there is enough quota to provision nodes above the desired number. Available quota effectively limits `max_surge`. If there is not enough quota for even one extra node, the update operation will hang with a quota-exhausted error. Such an error will be visible in `Operation.progress_data`. (see below for nested schema)
- `max_unavailable` (Attributes) The maximum number of nodes that can be simultaneously unavailable during the update process. This value can be specified either as an absolute number (for example, 3) or as a percentage of the desired number of nodes (for example, 5%). When specified as a percentage, the actual number is calculated by rounding down to the nearest whole number. This value cannot be 0 if `max_surge` is also set to 0. Defaults to 0. Example: if set to 20%, up to 20% of the nodes can be taken offline at once during the update, ensuring that at least 80% of the desired nodes remain operational. (see below for nested schema)
Nested Schema for strategy.max_surge
Read-Only:
- `count` (Number) Cannot be set alongside `percent`.
- `percent` (Number) Cannot be set alongside `count`.
Nested Schema for strategy.max_unavailable
Read-Only:
- `count` (Number) Cannot be set alongside `percent`.
- `percent` (Number) Cannot be set alongside `count`.
Nested Schema for template
Read-Only:
- `boot_disk` (Attributes) Parameters of a node's Nebius Compute Instance boot disk. (see below for nested schema)
- `cloud_init_user_data` (String, Sensitive) cloud-init user data. Should contain at least one SSH key.
- `filesystems` (Attributes List) Static attachments of Compute Filesystems. Can be used as a workaround until CSI for Compute Disk and Filesystem becomes available. (see below for nested schema)
- `gpu_cluster` (Attributes) Nebius Compute GPUCluster ID that will be attached to the node. (see below for nested schema)
- `gpu_settings` (Attributes) GPU-related settings. (see below for nested schema)
- `local_disks` (Attributes) Enables the provisioning of fast local drives. This type of storage is strictly ephemeral: on node restart, all data is erased, similar to RAM. (see below for nested schema)
- `metadata` (Attributes) (see below for nested schema)
- `network_interfaces` (Attributes List) (see below for nested schema)
- `os` (String) OS version that will be used to create the boot disk of Compute Instances in the NodeGroup. Supported platform / Kubernetes version / OS / driver preset combinations:
  - `gpu-l40s-a`, `gpu-l40s-d`, `gpu-h100-sxm`, `gpu-h200-sxm`, `cpu-e1`, `cpu-e2`, `cpu-d3`:
    - `drivers_preset` `""`: version 1.30 → `"ubuntu22.04"`; version 1.31 → `"ubuntu22.04"` (default), `"ubuntu24.04"`
  - `gpu-l40s-a`, `gpu-l40s-d`, `gpu-h100-sxm`, `gpu-h200-sxm`:
    - `drivers_preset` `"cuda12"` (CUDA 12.4): versions 1.30, 1.31 → `"ubuntu22.04"`
    - `drivers_preset` `"cuda12.4"`: version 1.31 → `"ubuntu22.04"`
    - `drivers_preset` `"cuda12.8"`: version 1.31 → `"ubuntu24.04"`
  - `gpu-b200-sxm`:
    - `drivers_preset` `""`: versions 1.30, 1.31 → `"ubuntu24.04"`
    - `drivers_preset` `"cuda12"` (CUDA 12.8): versions 1.30, 1.31 → `"ubuntu24.04"`
    - `drivers_preset` `"cuda12.8"`: version 1.31 → `"ubuntu24.04"`
  - `gpu-b200-sxm-a`:
    - `drivers_preset` `""`: version 1.31 → `"ubuntu24.04"`
    - `drivers_preset` `"cuda12.8"`: version 1.31 → `"ubuntu24.04"`
- `preemptible` (Attributes) Configures whether the nodes in the group are preemptible. Set to an empty value to enable preemptible nodes. (see below for nested schema)
- `reservation_policy` (Attributes) An interface to the "capacity block" (or "capacity block group") mechanism of Nebius Compute. ReservationPolicy is copied as-is from the Nebius API `compute/v1/instance.proto`. (see below for nested schema)
- `resources` (Attributes) Resources of the Nebius Compute Instance on which the node's kubelet will run. (see below for nested schema)
- `service_account_id` (String) The Nebius service account whose credentials will be available on the nodes of the group. With these credentials, it is possible to make `nebius` CLI or public API requests from the nodes without extra authentication. This service account is also used to make requests to the container registry. The `resource.serviceaccount.issueAccessToken` permission is required to use this field.
- `taints` (Attributes List) Kubernetes Node taints (see https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/). For now, a change will not be propagated to existing nodes, so it will be applied only to Kubernetes Nodes created after the field change. That behavior may change later; for now, you will need to set taints on existing nodes manually if needed. A change to this field will NOT trigger a NodeGroup roll-out. (see below for nested schema)
Nested Schema for template.boot_disk
Read-Only:
- `block_size_bytes` (Number)
- `size_bytes` (Number) Cannot be set alongside `size_kibibytes`, `size_mebibytes` or `size_gibibytes`.
- `size_gibibytes` (Number) Cannot be set alongside `size_bytes`, `size_kibibytes` or `size_mebibytes`.
- `size_kibibytes` (Number) Cannot be set alongside `size_bytes`, `size_mebibytes` or `size_gibibytes`.
- `size_mebibytes` (Number) Cannot be set alongside `size_bytes`, `size_kibibytes` or `size_gibibytes`.
- `type` (String) Possible values: `UNSPECIFIED`, `NETWORK_SSD`, `NETWORK_HDD`, `NETWORK_SSD_IO_M3`, `NETWORK_SSD_NON_REPLICATED`
Nested Schema for template.filesystems
Read-Only:
- `attach_mode` (String) Possible values: `UNSPECIFIED`, `READ_ONLY`, `READ_WRITE`
- `existing_filesystem` (Attributes) (see below for nested schema)
- `mount_tag` (String) Specifies the user-defined identifier, allowing its use as a device in the mount command.
Nested Schema for template.filesystems.existing_filesystem
Read-Only:
- `id` (String)
Nested Schema for template.gpu_cluster
Read-Only:
- `id` (String)
Nested Schema for template.gpu_settings
Read-Only:
- `drivers_preset` (String) Identifier of the predefined set of drivers included in the ComputeImage deployed on the Compute Instances that are part of the NodeGroup. Supported presets for different platform / Kubernetes version combinations:
  - `gpu-l40s-a`, `gpu-l40s-d`, `gpu-h100-sxm`, `gpu-h200-sxm`: version 1.30 → `"cuda12"` (CUDA 12.4); version 1.31 → `"cuda12"` (CUDA 12.4), `"cuda12.4"`, `"cuda12.8"`
  - `gpu-b200-sxm`: version 1.31 → `"cuda12"` (CUDA 12.8), `"cuda12.8"`
  - `gpu-b200-sxm-a`: version 1.31 → `"cuda12.8"`
Nested Schema for template.local_disks
Read-Only:
- `config` (Attributes) Defines actions that the managed Kubernetes service performs on mounted local disks to provide them inside the Kubernetes cluster with a convenient interface. (see below for nested schema)
- `passthrough_group` (Attributes) Requests passthrough local disks from the host. The topology of the provided disks is preserved during stop and start for every instance of a specific platform and preset in the region. (see below for nested schema)
Nested Schema for template.local_disks.config
Read-Only:
- `none` (Boolean) "Do nothing": local disks will be provisioned as on a regular compute instance.
Nested Schema for template.local_disks.passthrough_group
Read-Only:
- `requested` (Boolean) Passthrough local disks from the underlying host. Devices are expected to appear in the guest as NVMe devices (nvme0, nvme1, …), but the exact number depends on the preset. Enabled only when this field is explicitly set.
Nested Schema for template.metadata
Read-Only:
- `labels` (Map of String) Kubernetes Node labels. Keys and values must follow Kubernetes label syntax: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ For now, a change will not be propagated to existing nodes, so it will be applied only to Kubernetes Nodes created after the field change. That behavior may change later; for now, you will need to set labels on existing nodes manually if needed. System labels containing "kubernetes.io" and "k8s.io" will be ignored. A change to this field will NOT trigger a NodeGroup roll-out.
Nested Schema for template.network_interfaces
Read-Only:
- `public_ip_address` (Attributes) Parameters for the public IPv4 address associated with the interface. Set to an empty value to enable it. Describes a public IP address. (see below for nested schema)
- `subnet_id` (String) Nebius VPC Subnet ID that will be attached to the node's cloud instance network interface. By default, the Cluster control plane's `subnet_id` is used. The subnet should be located in the same network as the control plane.
Nested Schema for template.network_interfaces.public_ip_address
Nested Schema for template.preemptible
Nested Schema for template.reservation_policy
Read-Only:
- `policy` (String) Possible values:
  - `AUTO`:
    1. Will try to launch the instance in any of `reservation_ids`, if provided.
    2. Will try to launch the instance in any of the available Capacity Blocks.
    3. Will try to launch the instance as PAYG if 1 and 2 are not satisfied.
  - `FORBID`: The instance is launched only using on-demand (PAYG) capacity. No attempt is made to find or use a Capacity Block. It is an error to provide `reservation_ids` with `policy = FORBID`.
  - `STRICT`:
    1. Will try to launch the instance in Capacity Blocks from `reservation_ids`, if provided.
    2. If `reservation_ids` are not provided, will try to launch the instance in a suitable and available Capacity Block.
    3. Fail otherwise.
- `reservation_ids` (List of String) Capacity block groups; order matters.
Nested Schema for template.resources
Read-Only:
- `platform` (String)
- `preset` (String)
Nested Schema for template.taints
Read-Only:
- `effect` (String) Possible values: `EFFECT_UNSPECIFIED`, `NO_EXECUTE`, `NO_SCHEDULE`, `PREFER_NO_SCHEDULE`
- `key` (String)
- `value` (String)