This Nebius AI Cloud reference shows you how to set up and configure the AWS CLI, how to use it to work with buckets and objects, and which useful options you can apply to its commands.

Configuring AWS CLI

This section explains how to create a service account, grant it access to manage your resources, and set up the AWS CLI to act on the service account’s behalf. It also covers other settings that connect the AWS CLI to Nebius AI Cloud and Object Storage.
  1. Create a service account and save its ID to an environment variable:
    export NB_SA_ID=$(nebius iam service-account create \
      --name object-storage-sa --format json \
      | jq -r '.metadata.id')
    
  2. Grant edit access to the service account:
    1. Get your project ID and, from it, the tenant ID:
      export NB_PROJECT_ID=$(nebius config get parent-id)
      export NB_TENANT_ID=$(nebius iam project get $NB_PROJECT_ID --format jsonpath='{.metadata.parent_id}')
      
    2. Get the ID of the default editors group:
      export NB_EDITORS_GROUP_ID=$(nebius iam group get-by-name \
        --name editors --parent-id $NB_TENANT_ID \
        --format jsonpath='{.metadata.id}')
      
    3. Add the service account to the editors group:
      nebius iam group-membership create \
        --parent-id $NB_EDITORS_GROUP_ID \
        --member-id $NB_SA_ID
      
  3. Create an access key for the service account and get its AWS-like ID and contents:
    export NB_ACCESS_KEY_ID=$(nebius iam access-key create \
      --account-service-account-id $NB_SA_ID \
      --description 'AWS CLI' \
      --format json | jq -r '.resource_id')
    export NB_ACCESS_KEY_AWS_ID=$(nebius iam access-key get-by-id \
      --id $NB_ACCESS_KEY_ID \
      --format json | jq -r '.status.aws_access_key_id')
    export NB_SECRET_ACCESS_KEY=$(nebius iam access-key get-secret-once \
      --id $NB_ACCESS_KEY_ID --format json \
      | jq -r '.secret')
    
  4. Add the key to the AWS CLI configuration:
    aws configure set aws_access_key_id $NB_ACCESS_KEY_AWS_ID
    aws configure set aws_secret_access_key $NB_SECRET_ACCESS_KEY
    
  5. Depending on your project region, add the Nebius AI Cloud region ID and the Object Storage endpoint URL to the AWS CLI configuration:
    aws configure set region <region_ID>
    aws configure set endpoint_url https://storage.<region_ID>.nebius.cloud
    
    For example, run the following commands for a project in eu-north1:
    aws configure set region eu-north1
    aws configure set endpoint_url https://storage.eu-north1.nebius.cloud
    
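After these steps, you can run a quick sanity check to confirm that the AWS CLI is talking to Object Storage. This is a sketch: aws s3 ls only succeeds if the access key and endpoint from the steps above are valid.

```shell
# Show the credentials and region the AWS CLI will use.
aws configure list

# List buckets in the project; an empty result with no error
# also means the configuration works.
aws s3 ls
```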

Working with buckets

In Object Storage, you store your files in containers called buckets.

Create a bucket

Use the s3 mb (“make bucket”) command to create a bucket:
aws s3 mb s3://<bucket_name>
Run:
aws s3 mb s3://example-bucket
Output:
make_bucket: example-bucket

List buckets on your cluster

Use the s3 ls (“LiSt”) command to see all the available buckets on your Object Storage cluster:
aws s3 ls

Delete a bucket

Use the s3 rb (“remove bucket”) command to delete an empty bucket:
aws s3 rb s3://<bucket_name>
A bucket you want to delete must be completely empty. If you still have objects in your bucket, use the --force option to remove a non-empty bucket:
aws s3 rb s3://<bucket_name> --force
Run:
aws s3 rb s3://example-bucket
Output:
remove_bucket: example-bucket

Working with objects

In Object Storage, your files are stored as objects. There are no real folders: a folder-like hierarchy is expressed through prefixes in object keys.

Upload objects

The AWS CLI does not support uploading objects to a storage class that differs from the bucket’s default. To do that, use s5cmd. For details, see How to upload.
You can choose to upload a single file or a whole folder containing any number of files:
Use the s3 cp (“CoPy”) command to upload a specified local file to your Object Storage bucket with a specified prefix:
aws s3 cp <source/path> s3://<bucket_name>/<object_key>
Run:
aws s3 cp lorem-ipsum/lorem.txt s3://example-bucket/lorem-ipsum/
Output:
upload: lorem-ipsum/lorem.txt to s3://example-bucket/lorem-ipsum/lorem.txt
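To upload a whole folder rather than a single file, add the --recursive option. A sketch using the same example names as above:

```shell
# Upload every file in the local lorem-ipsum/ folder
# under the lorem-ipsum/ prefix in the bucket.
aws s3 cp lorem-ipsum/ s3://example-bucket/lorem-ipsum/ --recursive
```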

List objects in your bucket

Use the s3 ls (“LiSt”) command with a <bucket_name> and an optional [<prefix_for_object_keys>] to list the objects in a bucket:
aws s3 ls s3://<bucket_name>/[<prefix_for_object_keys>] --recursive --human-readable --summarize
The options used in the command above are the following:
  • --recursive lists objects under all object keys, not only the top level of the bucket.
  • --human-readable prints object sizes in human-readable units (Bytes, KiB, MiB and so on) instead of raw byte counts.
  • --summarize prints the total number and total size of the listed objects below the list.
In this example, your Object Storage example-bucket contains three text files: lorem.txt, euismod.txt and litora.txt under the object key lorem-ipsum/. Run:
aws s3 ls s3://example-bucket/ --recursive --human-readable --summarize
Output:
2024-09-05 16:35:34  458 Bytes lorem-ipsum/euismod.txt
2024-09-05 16:35:35  442 Bytes lorem-ipsum/litora.txt
2024-09-05 16:35:35  505 Bytes lorem-ipsum/lorem.txt

Total Objects: 3
   Total Size: 1.4 KiB
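If you need machine-readable output instead of the formatted listing above, the lower-level s3api commands return JSON that you can process with jq, which is already used in the configuration steps. A sketch, assuming the same example-bucket:

```shell
# List objects under a prefix as JSON and extract only the keys.
aws s3api list-objects-v2 \
    --bucket example-bucket \
    --prefix lorem-ipsum/ \
    | jq -r '.Contents[].Key'
```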

Move objects between buckets

Use the s3 mv (“MoVe”) command to copy objects with a specified prefix from the source bucket to another destination bucket with a new prefix within your Object Storage cluster, then remove the objects at the origin:
aws s3 mv s3://<origin_bucket_name>/[<prefix_for_object_keys>]/ s3://<target_bucket_name>/[<prefix_for_object_keys>]/ --recursive
The --recursive option applies the command to all objects under the specified prefix.
In this example, your Object Storage example-bucket contains three text files: lorem.txt, euismod.txt and litora.txt under the object key lorem-ipsum/. You need to move these objects to another-example-bucket under the prefix lorem-ipsum-mv/. Run:
aws s3 mv s3://example-bucket/lorem-ipsum/ s3://another-example-bucket/lorem-ipsum-mv/ --recursive
Output:
move: s3://example-bucket/lorem-ipsum/litora.txt to s3://another-example-bucket/lorem-ipsum-mv/litora.txt
move: s3://example-bucket/lorem-ipsum/lorem.txt to s3://another-example-bucket/lorem-ipsum-mv/lorem.txt
move: s3://example-bucket/lorem-ipsum/euismod.txt to s3://another-example-bucket/lorem-ipsum-mv/euismod.txt
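If you want to copy the objects without deleting them from the source bucket, s3 sync is a non-destructive alternative to s3 mv; it also skips files that already exist at the destination. A sketch with the same example buckets:

```shell
# Copy all objects under the prefix to the other bucket,
# leaving the source objects in place.
aws s3 sync s3://example-bucket/lorem-ipsum/ s3://another-example-bucket/lorem-ipsum/
```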

Download objects

Use the s3 cp (“CoPy”) command to download a specified object with a prefix from your Object Storage bucket to a destination on your local machine:
aws s3 cp s3://<bucket_name>/<object_key> <local/destination/path/>
To save the file under a different name, specify the new name in <local/destination/path/>.
In this example, you’ll download the lorem.txt file from the Object Storage example-bucket to the local example-download/ folder as lorem-download.txt. Run:
aws s3 cp s3://example-bucket/lorem-ipsum/lorem.txt example-download/lorem-download.txt
Output:
download: s3://example-bucket/lorem-ipsum/lorem.txt to example-download/lorem-download.txt
If the name of your local folder contains spaces, put it in single quotation marks. For example: Documents/'My ML configurations'/.
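To download a whole prefix rather than a single object, add the --recursive option. A sketch using the example names above:

```shell
# Download every object under the lorem-ipsum/ prefix
# into the local example-download/ folder.
aws s3 cp s3://example-bucket/lorem-ipsum/ example-download/ --recursive
```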

Delete objects

Use the s3 rm (“ReMove”) command to delete objects from your Object Storage bucket.
aws s3 rm s3://<bucket_name>/<object_key>
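For example, to delete a single object, or all objects under a prefix (a sketch with the example names used above):

```shell
# Delete one object.
aws s3 rm s3://example-bucket/lorem-ipsum/lorem.txt

# Delete every object under the lorem-ipsum/ prefix.
aws s3 rm s3://example-bucket/lorem-ipsum/ --recursive
```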

Useful options for AWS CLI commands

This section describes useful options you can use with most of the commands listed above.

Include

When you use the s3 cp, s3 sync, s3 mv or s3 rm command, you can apply the --include option to apply the command only to files that match the specified pattern. Note that all files are included by default, so --include only takes effect after an --exclude filter has narrowed the selection. For example, your Object Storage bucket contains images and text files, but you want to download text files only. Set the rule as follows:
aws s3 cp s3://<bucket_name>/[<prefix_for_object_keys>] <local/destination/path/> \
    --recursive \
    --exclude "*" --include "*.txt"
In the command above:
  • --recursive copies all the objects available at the source.
  • --exclude "*" first excludes every file.
  • --include "*.txt" then re-includes .txt files, so the command applies only to them. Filters are evaluated in order; use multiple --include flags to re-include more than one pattern.

Exclude

When you use the s3 cp, s3 sync, s3 mv or s3 rm command, you can apply the --exclude option to apply the command to all files except those that match the specified patterns. For example, your Object Storage bucket contains images in .png and .jpg formats together with text files, but you want to download text files only. Set the rule as follows:
aws s3 cp s3://<bucket_name>/[<prefix_for_object_keys>] <local/destination/path/> \
    --recursive \
    --exclude "*.png" --exclude "*.jpg"
In the command above:
  • --recursive copies all the objects available at the source.
  • --exclude "*.png" --exclude "*.jpg" applies the command to all files except .png and .jpg files. Filters are evaluated in order; use multiple --exclude flags to define more than one pattern.
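Most of these commands also support the --dryrun option, which prints the operations a command would perform without executing them. This is useful for previewing the effect of include/exclude filters. A sketch, assuming the example-bucket from the sections above:

```shell
# Preview which files a filtered download would transfer,
# without transferring anything (--dryrun is supported by
# s3 cp, s3 mv, s3 rm and s3 sync).
aws s3 cp s3://example-bucket/ example-download/ \
    --recursive \
    --exclude "*" --include "*.txt" \
    --dryrun
```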