Nebius AI Cloud supports integration with dstack, an open-source container orchestrator for AI workloads. It is a streamlined, AI-focused alternative to Slurm and Kubernetes: with dstack, you can develop, train, and deploy AI models. To get started, install the dstack server on your local machine. After that, you can create and deploy dstack resources, such as tasks or services.
If you do not want to use your local machine, you can create a Compute virtual machine and then connect to it.

Costs

If you install dstack on your local machine, Nebius AI Cloud does not charge for the resources required for the dstack server installation. If you use a VM, see the Compute pricing.

Steps

Prepare a service account

To configure access to Nebius AI Cloud for the dstack server:
  1. Create a service account.
  2. Add it to a group that has at least the editor role within your tenant; for example, the default editors group.
  3. Upload an authorized key to the created service account:
    1. In the sidebar, go to Administration → IAM.
    2. Go to the Service accounts tab.
    3. Open the created service account’s page.
    4. Go to the Authorized keys tab and then click Upload authorized key.
    5. Generate the key:
      openssl genrsa -out private.pem 4096 && \
      openssl rsa -in private.pem -outform PEM -pubout \
      -out public.pem
      
    6. In the web console, attach the generated public.pem file to the service account.
    7. (Optional) Specify the date when the key should expire.
    8. Click the Upload key button.
    After that, the key appears in the list of authorized keys.
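Before uploading, you can sanity-check that the two files form a matching pair. A minimal sketch, using the same file names as the command above:

```shell
# Generate the pair as above, then confirm the public key matches the private key
openssl genrsa -out private.pem 4096
openssl rsa -in private.pem -outform PEM -pubout -out public.pem
priv_mod=$(openssl rsa -in private.pem -noout -modulus)
pub_mod=$(openssl rsa -pubin -in public.pem -noout -modulus)
[ "$priv_mod" = "$pub_mod" ] && echo "keys match"
```

If the moduli differ, the public key was not derived from this private key and the server will not be able to authenticate.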

Create a configuration file for the dstack server

  1. Create the ~/.dstack/server/ directory and go into it:
    mkdir -p ~/.dstack/server
    cd ~/.dstack/server
    
  2. Create the following config.yml configuration file:
    projects:
    - name: main
      backends:
      - type: nebius
        creds:
          type: service_account
          service_account_id: serviceaccount-***
          public_key_id: publickey-***
          private_key_file: <path/to/private.pem>
    
    Specify the following parameters:
    • service_account_id: ID of the created service account. You can copy the ID from the Service accounts page.
    • public_key_id: ID of the uploaded authorized key. To copy the ID, go to the created service account’s page and open the Authorized keys tab.
    • private_key_file: Path to the private.pem file. It was generated as part of the authorized key.

Deploy the configuration file and run the server

  1. Install Python version 3.10 or higher.
  2. Install dstack:
    pip3 install "dstack[nebius]" -U
    
    If you receive error: externally-managed-environment, create a Python virtual environment and run the installation command there. The package is then installed in isolation from the system environment. Alternatively, run the installation command with the --break-system-packages parameter. This option is less safe than a virtual environment, but it can be useful when you work in a dedicated directory with dstack commands: the dstack server command keeps running in a terminal tab until you interrupt it, so you may need the dstack commands available in all tabs, not only in the tab where the virtual environment is activated.
    1. Create the environment (this creates a directory with the environment name):
      python3 -m venv <environment_name>
      
    2. Activate it:
      source <environment_name>/bin/activate
      
    Now you can install the required Python packages. When you no longer need the virtual environment, run the deactivate command and delete the environment directory.
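Taken together, the virtual-environment lifecycle looks like this (the environment name .venv is our choice, not required by dstack):

```shell
# Create, use, and eventually remove a virtual environment (name .venv is arbitrary)
python3 -m venv .venv              # creates the .venv directory
. .venv/bin/activate               # python3 and pip3 now resolve inside .venv
command -v python3                 # prints a path under .venv
# When finished: deactivate && rm -rf .venv
```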
  3. Run the dstack server:
    dstack server
    
    ╱╱╭╮╱╱╭╮╱╱╱╱╱╱╭╮
    ╱╱┃┃╱╭╯╰╮╱╱╱╱╱┃┃
    ╭━╯┣━┻╮╭╋━━┳━━┫┃╭╮
    ┃╭╮┃━━┫┃┃╭╮┃╭━┫╰╯╯
    ┃╰╯┣━━┃╰┫╭╮┃╰━┫╭╮╮
    ╰━━┻━━┻━┻╯╰┻━━┻╯╰╯
    ╭━━┳━━┳━┳╮╭┳━━┳━╮
    ┃━━┫┃━┫╭┫╰╯┃┃━┫╭╯
    ┣━━┃┃━┫┃╰╮╭┫┃━┫┃
    ╰━━┻━━┻╯╱╰╯╰━━┻╯
    
    [12:34:23] INFO     Applying ~/.dstack/server/config.yml...                                                                                                   
    [12:34:27] INFO     dstack._internal.server.services.plugins:77 Found not enabled builtin plugin rest_plugin. Plugin will not be loaded.                      
               INFO     Configured the main project in ~/.dstack/config.yml                                                                                       
               INFO     The admin token is ******                                                                                   
               INFO     The dstack server 0.19.17 is running at http://127.0.0.1:3000
    
    You can open the dstack web interface at the address shown in the log and sign in with the admin token.
    The command does not exit; it keeps the server running in the foreground until you interrupt it.
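Because the command blocks the terminal, you may prefer to run the server in the background. A minimal sketch (the log and PID file locations are our choice; assumes dstack is on your PATH):

```shell
# Run the dstack server in the background, keeping its log and PID for later
nohup dstack server > dstack-server.log 2>&1 &
echo $! > dstack-server.pid
# Stop it later with:
# kill "$(cat dstack-server.pid)"
```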
  4. Create a dedicated directory to work with dstack:
    mkdir <path/to/new/directory>
    cd <path/to/new/directory>
    
  5. Initialize the directory for use with dstack:
    dstack init
    
Now, you can start orchestrating your AI workloads.
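As a first workload, you can describe a minimal task in a dstack configuration file. A sketch, assuming the current dstack YAML format; the file name hello.dstack.yml and the task name are our choice:

```shell
# Write a minimal dstack task configuration
cat > hello.dstack.yml <<'EOF'
type: task
name: hello
commands:
  - echo "Hello from dstack"
EOF
# Submit it to the running server with:
# dstack apply -f hello.dstack.yml
```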

What’s next

To manage AI workloads, you configure and operate dstack resources. They let you deploy AI models and make efficient use of cloud resources. For more information, see the dstack documentation for these resources.

See also