To deploy an AI model, create an endpoint. Serverless AI endpoints are based on containers over virtual machines (VMs) in Compute. Your model runs in a container over VM, and you access the model through the endpoint.
Web console
CLI
In the sidebar, go to AI Services → Endpoints.
Click Create endpoint.
Specify the endpoint name.
In the Endpoint settings section, specify the path to the container image. If you use a private registry, click Add registry and provide the details for your registry.
Set the container ports for the endpoint. You can add multiple ports.
(Optional) Configure advanced settings:
Entrypoint command: Specify an entrypoint command for the container.
Arguments: Override container arguments that are passed to the entrypoint.
Environment variables: Specify environment variables in key-value pairs.
SSH key: Add an SSH key for the VM’s user so you can connect to the VM.
Authentication: If the endpoint serves production traffic, enable token authentication. The system generates a token. Copy and save the token securely before proceeding. If you are prototyping or testing, you can leave authentication disabled.
(Optional) Configure the Computing resources section:
Select whether the VM should have GPUs.
Specify the VM type: regular or preemptible. VMs without GPUs support only the regular type.
Attach a bucket or a filesystem to provide storage. You can create a bucket or filesystem, or use an existing one. To create a new bucket, see Bucket parameters. To create a new filesystem, see Volume parameters.
Configure the Network section:
Select a subnet or create a new one.
Select the IP address type: Public static IP or Private IP. If you want to connect to the endpoint from the internet, select Public static IP.
--image: Container image reference in the registry/path:tag or registry/path@digest format. Use an image from a public registry or your authenticated private registry.
--registry-username, --registry-password (optional): Credentials to authenticate if you pull an image from a private registry. Alternatively, use --registry-secret for credentials stored in MysteryBox.
--registry-username: Username.
--registry-password: Personal access token, password, or API key, depending on where your registry is hosted: Docker Hub, Microsoft Azure, GitHub, NVIDIA, or a custom registry.
If you pull an image from a public registry or from Container Registry in the same project, you don’t need to specify credentials.
--registry-secret (optional): MysteryBox secret selector with REGISTRY_USERNAME and REGISTRY_PASSWORD payload keys. You can specify a secret name, secret ID, version ID or a combined secret/version selector such as mbsec-e00***@mbsecver-e00***.
--container-command (optional): Entrypoint command for the container.
--args (optional): Arguments for docker run to pass to the entrypoint command.
--env (optional): Environment variables for the container. Set them in the key=value format. If you need to set several variables, list the key=value pairs separated by commas.
--env-secret (optional): Environment variables loaded from a MysteryBox secret in the key=value format. The value can be a secret name, secret ID, version ID or a combined secret/version selector such as mbsec-e00***@mbsecver-e00***. If you need to set several variables, list the pairs separated by commas.
--container-port (optional): Port that the endpoint exposes.
--auth (optional): Authentication method. If the parameter isn't set (default), no authentication is required; this is useful when you want to create an endpoint prototype and test it. If you set --auth token, you enable authentication, which is recommended for production. When you call the endpoint, specify the token in the "Authorization: Bearer <token>" HTTP header. Use --token or --token-secret to configure the token. If you don't provide either, the CLI generates a random token.
--token (optional): Token for authentication. To generate one manually, run openssl rand -hex 32. If you don’t provide --token, the CLI generates one for you.
--token-secret (optional): MysteryBox secret selector with the AUTH_TOKEN payload key. You can specify a secret name, secret ID, version ID or a combined secret/version selector such as mbsec-e00***@mbsecver-e00***.
--volume (optional): Bucket or shared filesystem to mount to the endpoint container. You can use volumes to store model files and other endpoint artifacts. Specify the value in either format:
source:container_path[:mode] for mounting Nebius shared filesystems and existing bucket or volume resources by ID or name.
s3://bucket:/container_path[:mode[:profile]] for mounting an Object Storage bucket with AWS profile credentials or S3 credentials stored in MysteryBox. The profile is the AWS credentials profile to use. If you manage your credentials with MysteryBox, use profile@<secret_selector>, where <secret_selector> is a secret name, secret ID, version ID, or a combined secret/version selector such as mbsec-e00***@mbsecver-e00***.
The supported modes are ro (read-only) and rw (read-write, the default). Repeat the parameter to mount multiple volumes.
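For example, the two formats might look as follows (the filesystem name, bucket name, mount paths, and profile name here are hypothetical):

```shell
# Mount a shared filesystem or an existing bucket/volume resource by name or ID, read-only:
--volume my-filesystem:/mnt/models:ro

# Mount an Object Storage bucket with an AWS credentials profile, read-write (default):
--volume s3://my-bucket:/mnt/data:rw:my-profile

# Same, with the profile credentials stored in a MysteryBox secret:
--volume s3://my-bucket:/mnt/data:rw:my-profile@mbsec-e00***
```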
--preset: Number of GPUs, vCPUs and RAM allocated to the container. The preset must match the selected platform. See available presets in Presets for GPU platforms.
--disk-size: Disk size of the container over VM. Specify a value such as 100Gi, 500Gi or 1Ti. The default value is 250Gi. See how disk performance depends on disk size.
--shm-size (optional): Shared memory size of /dev/shm. Specify a value such as 64Mi, 128Mi or 1Gi. The default value is 16Gi.
--ssh-key (optional): SSH key for accessing the container over VM by SSH. When you add an SSH key, a public dynamic IP address is assigned. Before you add the key, check your quota on the number of public IP addresses in the web console.
--public (optional): Assigns a public IP address to the container over VM. Required if you want to connect to the endpoint from the internet.
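The flags above can be combined into a single create call. The sketch below is illustrative only: the create subcommand and the --name flag are assumptions modeled on the list, get, and delete commands shown on this page, and the image path and preset name are hypothetical; check the CLI help for the exact syntax.

```shell
nebius ai endpoint create \
  --name my-model-endpoint \                  # assumed flag for the endpoint name
  --image cr.example.com/models/my-llm:latest \
  --container-port 8000 \
  --env MODEL_NAME=my-llm \
  --auth token \                              # enable token authentication for production
  --preset <preset_name> \                    # pick one from Presets for GPU platforms
  --disk-size 250Gi \
  --public                                    # assign a public IP for internet access
```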
Endpoint creation takes approximately five minutes.
You can call an endpoint to interact with the AI model hosted in it; for example, to chat with the model. To call the endpoint:
Get the endpoint IP address:
Web console
CLI
In the sidebar, go to AI Services → Endpoints.
Open the page of the required endpoint.
In the Network section, copy the IP address from the Public endpoints or Private endpoints field.
To get the endpoint ID, list all endpoints:
nebius ai endpoint list
In the output, copy the ID of the required endpoint.
Get the endpoint IP address:
nebius ai endpoint get <endpoint_ID> \
  --format json | jq -r '.status.instances[0].public_ip'
Call the endpoint by using an HTTP client. For example, with curl:
<endpoint_IP_address>: IP address that you copied earlier.
<token>: Authentication token that you specified when you created the endpoint. If you didn’t specify any token, don’t use the Authorization HTTP header.
model: AI model that is hosted in the endpoint and that you chat with.
content: Message that you want to send to the model.
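Putting these placeholders together, a call might look like the following sketch. The port 8000, the /v1/chat/completions path, and the request body assume your container serves an OpenAI-compatible chat API; adjust them to whatever server your image actually runs.

```shell
curl "http://<endpoint_IP_address>:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <token>" \
  -d '{
    "model": "<model_name>",
    "messages": [
      {"role": "user", "content": "Tell me a joke about AI."}
    ]
  }'
```

If you created the endpoint without a token, omit the Authorization header.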
The response looks like the following:
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Why did the AI cross the road? Because it learned the optimal path after 10,000 epochs."
      }
    }
  ]
}
If you don't currently need your endpoint but want to preserve it, you can stop the endpoint and start it again later. You aren't charged for the computing resources of a stopped endpoint. However, if you mounted a volume to the endpoint, you are still charged for the volume while the endpoint is stopped.
Web console
CLI
In the sidebar, go to AI Services → Endpoints.
Locate the endpoint and then click → Stop or Start.
In the window that opens, confirm the action.
To get the endpoint ID, list all endpoints:
nebius ai endpoint list
In the output, copy the ID of the required endpoint.
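Then stop or start the endpoint. The subcommand names below are an assumption modeled on the delete command shown later on this page; check the CLI help for the exact names:

```shell
nebius ai endpoint stop --id <endpoint_ID>
nebius ai endpoint start --id <endpoint_ID>
```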
When you delete an endpoint, Serverless AI automatically deletes its VM and container (boot) disk. If you no longer need the endpoint, delete it:
Web console
CLI
In the sidebar, go to AI Services → Endpoints.
Locate the endpoint and then click → Delete.
In the window that opens, confirm the deletion.
To get the endpoint ID, list all endpoints:
nebius ai endpoint list
In the output, copy the ID of the required endpoint.
Delete the endpoint:
nebius ai endpoint delete --id <endpoint_ID>
If a static IP address is assigned to the endpoint, you are prompted to confirm releasing the address. Enter y to confirm, or n to keep the address allocated in the pool.