When you configure expiration rules, make sure that only the required objects and versions fall under these rules. Otherwise, you can unintentionally delete objects that must be preserved.
Web console
Nebius AI Cloud CLI
Terraform
In the web console, you can only add lifecycle rules to existing buckets. You cannot create a bucket with a lifecycle rule. To add an expiration lifecycle rule to a bucket:
In the web console, go to Storage → Object Storage.
Open the page of the required bucket.
Go to the Lifecycle rules tab.
Click Add rule.
(Optional) On the page that opens, specify the rule name. It helps identify the rule in a list of lifecycle rules.
Specify which objects the rule applies to:
Object key prefix: The rule applies to objects whose names begin with a specified prefix. For instance, if you set the expiring/ prefix, the rule applies to the expiring/test.txt object and does not apply to the test.txt object.
Minimum size, bytes: The rule applies to objects larger than the specified size.
Maximum size, bytes: The rule applies to objects smaller than the specified size. If you set both a minimum and a maximum size, make sure that the maximum size is greater than or equal to the minimum.
Set the actions that the lifecycle rule should perform:
Expiration: Delete objects and versions a specified number of days after upload, or on a specific date. For example, you can delete objects five days after their upload.
Non-current version expiration: If you enabled versioning in your bucket, you can delete all versions except for the current one. To make them expire a number of days after they become non-current, specify the required number of days.
Abort of incomplete multipart upload: Clean up multipart uploads that failed and remained incomplete. If an upload stays incomplete longer than the specified number of days, Object Storage aborts it and deletes all the data uploaded so far.
Click Add rule.
When you update the settings of a bucket, make sure to list all lifecycle rules, even the ones that you do not change. If you do not include some rules, Object Storage will delete them.

When you reconfigure a rule, specify only the parameters that you want to change. For example, if you earlier set a filter of objects in a rule and you want to change an expiration condition, specify only the condition and the rule name or ID. You do not need to set the filter again, as it will be preserved.

To create an expiration lifecycle rule:
The rules are specified in the spec.lifecycle_configuration.rules parameter. If you want to preserve them when adding a new rule, duplicate those rules in the update command.
To configure a new lifecycle rule, run one of the following commands:
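For example, an update command might look like the following sketch. The flag names match those described on this page; the rule body and placeholder values are illustrative, and <rule_conditions> stands for one of the condition examples below:

```bash
# Hypothetical sketch: add an expiration lifecycle rule to an existing bucket.
# Verify flag names against `nebius storage bucket update --help`.
nebius storage bucket update \
  --id <bucket_id> \
  --lifecycle-configuration-rules '[
    {
      "id": "my-expiration-rule",
      "status": "ENABLED",
      <rule_conditions>
    }
  ]'
```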
In the --id parameter, specify the bucket ID. To get it, run nebius storage bucket get-by-name --name <bucket_name>.
Every rule contains the following fields:
id: Rule ID (string). Helps identify the rule in the list of lifecycle rules.
status: Whether the rule is enabled or disabled. Use the DISABLED status to turn the rule off but preserve it in the list.
To set the rule conditions, use the examples below and insert them into the command instead of <rule_conditions>. You can combine and adapt these examples to your needs.
To delete objects five days after their upload, specify:
"expiration": { "days": 5}
To delete objects whose names begin with a certain prefix, specify:
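For example, a rule body combining the prefix filter with the expiration condition above (the filter field name matches the transition examples later on this page):

```json
"filter": { "prefix": "expiring/" },
"expiration": { "days": 5 }
```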
For instance, if you set the expiring/ prefix, the expiring/test.txt object will be deleted five days after the upload, and the test.txt object will not.
The service applies the rule five days after the object upload.
If you enabled versioning in your bucket, you can delete outdated versions. These are often all the versions except for the current one. To make them expire five days after they are no longer current:
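A sketch of such a condition; the field name non_current_expiration is an assumption (modeled on the S3 NoncurrentVersionExpiration action), so verify it against the CLI reference:

```json
"non_current_expiration": { "days": 5 }
```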
The versions will expire five days after they are no longer current.
If multipart uploads of large objects failed and remained stale, you can abort them and delete all the data uploaded so far. For example, to delete stale multipart uploads seven days after they were initiated (that is, after the CreateMultipartUpload method was executed):
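A sketch of such a condition; the field names are assumptions modeled on the S3 AbortIncompleteMultipartUpload action and its DaysAfterInitiation element, so verify them against the CLI reference:

```json
"abort_incomplete_multipart_upload": { "days_after_initiation": 7 }
```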
default_storage_class: Default storage class. It applies to objects if a storage class is not set for them explicitly during the upload.
versioning_policy: Enables versioning of objects in the bucket. For more information, see Object versioning in a bucket.
lifecycle_configuration.rules: List of lifecycle rules. Every rule contains the following parameters:
id: Rule ID (string). Helps identify the rule in the list of lifecycle rules.
status: Whether the rule is enabled or disabled. Use the DISABLED status to turn the rule off but preserve it in the list.
To set the rule conditions, use the examples below and insert them into the configuration file instead of <rule_conditions>. You can combine and adapt these examples to your needs.
To delete objects five days after their upload, specify:
expiration = { days = 5}
To delete objects whose names begin with a certain prefix, specify:
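For example, a rule body combining the prefix filter with the expiration condition above (the filter attribute matches the transition examples later on this page):

```hcl
filter     = { prefix = "expiring/" }
expiration = { days = 5 }
```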
For instance, if you set the expiring/ prefix, the expiring/test.txt object will be deleted five days after the upload, and the test.txt object will not.
The service applies the rule five days after the object upload.
If you enabled versioning in your bucket, you can delete outdated versions. These are often all the versions except for the current one. To make them expire five days after they are no longer current:
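A sketch of such a condition; the attribute name non_current_expiration is an assumption (modeled on the S3 NoncurrentVersionExpiration action), so verify it against the provider reference:

```hcl
non_current_expiration = { days = 5 }
```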
The versions will expire five days after they are no longer current.
If multipart uploads of large objects failed and remained stale, you can abort them and delete all the data uploaded so far. For example, to delete stale multipart uploads seven days after they were initiated (that is, after the CreateMultipartUpload method was executed):
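A sketch of such a condition; the attribute names are assumptions modeled on the S3 AbortIncompleteMultipartUpload action, so verify them against the provider reference:

```hcl
abort_incomplete_multipart_upload = { days_after_initiation = 7 }
```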
Before you configure a transition rule, check your quotas in the project. Make sure that they are sufficient for switching a storage class.
Web console
Nebius AI Cloud CLI
Terraform
In the web console, you can only add lifecycle rules to existing buckets. You cannot create a bucket with a lifecycle rule. To add a transition lifecycle rule to a bucket:
In the web console, go to Storage → Object Storage.
Open the page of the required bucket.
Go to the Lifecycle rules tab.
Click Add rule.
(Optional) On the page that opens, specify the rule name. It helps identify the rule in a list of lifecycle rules.
Specify which objects the rule applies to:
Object key prefix: The rule applies to objects whose names begin with a specified prefix. For instance, if you set the expiring/ prefix, the rule applies to the expiring/test.txt object and does not apply to the test.txt object.
Minimum size, bytes: The rule applies to objects larger than the specified size.
Maximum size, bytes: The rule applies to objects smaller than the specified size. If you set both a minimum and a maximum size, make sure that the maximum size is greater than or equal to the minimum.
To create a transition lifecycle rule, enable the Modify storage class action.
Select the rule type:
Upload: Set the number of days after the object upload. When this period expires, the storage class of the object changes.
Last access: Set the number of days after the last access to the object. When this period expires, the storage class changes.
You can set filters to define what actions are considered the last access to the object. These filters apply to all last-access lifecycle rules.
Date: Set a specific date when you need to switch the storage class.
Select the new storage class that the lifecycle rule assigns to objects.
Click Add rule.
When you update the settings of a bucket, make sure to list all lifecycle rules, even the ones that you do not change. If you do not include some rules, Object Storage will delete them.

When you reconfigure a rule, specify only the parameters that you want to change. For example, if you earlier set a filter of objects in a rule and you want to change an expiration condition, specify only the condition and the rule name or ID. You do not need to set the filter again, as it will be preserved.

To create a transition lifecycle rule:
The rules are specified in the spec.lifecycle_configuration.rules parameter. If you want to preserve them when adding a new rule, duplicate those rules in the update command.
To configure a new transition lifecycle rule, run one of the following commands:
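For example, an update command might look like the following sketch. The flag names match those described below; the rule body, the storage class placeholder, and the 30-day period are illustrative:

```bash
# Hypothetical sketch: add a transition lifecycle rule to an existing bucket.
# Verify flag names against `nebius storage bucket update --help`.
nebius storage bucket update \
  --id <bucket_id> \
  --lifecycle-configuration-rules '[
    {
      "id": "my-transition-rule",
      "status": "ENABLED",
      "transition": { "storage_class": "<new_storage_class>", "days": 30 }
    }
  ]'
```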
--id: Bucket ID. To get it, run nebius storage bucket get-by-name --name <bucket_name>.
--default-storage-class: Default storage class. It applies to objects if a storage class is not explicitly set for them during the upload.
--lifecycle-configuration-rules: List of lifecycle rules. Every rule contains the following fields:
id: Rule ID (string). Helps identify the rule in the list of lifecycle rules.
status: Whether the rule is enabled or disabled. Use the DISABLED status to turn the rule off but preserve it in the list.
transition: Conditions for a transition lifecycle rule. To change a storage class, set the storage_class parameter. Use either the days or days_since_last_access parameter:
storage_class: New storage class.
days: The rule applies the specified number of days after the object upload.
days_since_last_access: The rule applies after the object has not been accessed for the specified number of days. The service considers an object to be accessed if its data was retrieved or copied by using S3 methods, such as GetObject, HeadObject, GetObjectTagging, CopyObject or UploadPartCopy. You can configure the rule to refer only to some of these methods by using last access filters.
filter (optional): Filters of objects. By using them, you can apply a rule to certain objects only. You can combine the filters if needed. The available filters are the following:
To transition objects whose names begin with a certain prefix, specify:
"filter": { "prefix": "expiring/"}
For instance, if you set the expiring/ prefix, the storage class of the expiring/test.txt object will be switched. However, this will not happen to the test.txt object.
default_storage_class: Default storage class. It applies to objects if a storage class is not explicitly set for them during the upload.
versioning_policy: Enables versioning of objects in the bucket. For more information, see Object versioning in a bucket.
lifecycle_configuration.rules: List of lifecycle rules. Every rule contains the following parameters:
id: Rule ID (string). Helps identify the rule in the list of lifecycle rules.
status: Whether the rule is enabled or disabled. Use the DISABLED status to turn the rule off but preserve it in the list.
transition: Conditions for a transition lifecycle rule. To change a storage class, set the storage_class parameter. Use either the days or days_since_last_access parameter:
storage_class: New storage class.
days: The rule applies the specified number of days after the object upload.
days_since_last_access: The rule applies after the object has not been accessed for the specified number of days. The service considers an object to be accessed if its data was retrieved or copied by using S3 methods, such as GetObject, HeadObject, GetObjectTagging, CopyObject or UploadPartCopy. You can configure the rule to refer only to some of these methods by using last access filters.
filter (optional): Filters of objects. By using them, you can apply a rule to certain objects only. You can combine the filters if needed. The available filters are the following:
To transition objects whose names begin with a certain prefix, specify:
filter = { prefix = "expiring/"}
For instance, if you set the expiring/ prefix, the storage class of the expiring/test.txt object will be switched. However, this will not happen to the test.txt object.
To transition objects that exceed 5120 bytes:
filter = { object_size_greater_than_bytes = 5120}
To transition objects that are less than 128 bytes:
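By symmetry with the greater-than filter above, the attribute name is presumably the following (verify it against the provider reference):

```hcl
filter = { object_size_less_than_bytes = 128 }
```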
(Optional) Filters for last-access lifecycle rules
You can filter actions that Object Storage considers or ignores when it determines the timestamp of the object’s last access. In the filters, specify S3 methods and user agents (such as rclone) that should be included or excluded.

Filters apply to all last-access lifecycle rules. You configure them at the bucket level.

By default, no filters apply: all S3 methods and user agents are included.
Web console
Nebius AI Cloud CLI
Terraform
In the web console, go to Storage → Object Storage.
Open the page of the required bucket.
Go to the Settings tab.
In the Filters for last-access lifecycle rules section, click Add filter.
In the Filter type field, select one of the following:
Include: Consider only the methods and user agents listed in the filter.
Exclude: Ignore the methods and user agents listed.
Specify the S3 methods and user agents for the filter. Among the S3 methods, you can set GetObject, HeadObject, GetObjectTagging, CopyObject and UploadPartCopy. If you do not specify methods or user agents, Object Storage considers all of them.
Click Add filter. The new filter appears in the list of filters for last-access lifecycle rules.
Click Save changes to confirm.
When you create or update a bucket, set the filters by using the --lifecycle-configuration-last-access-filter-conditions parameter.
Apply the filtering parameter only if you use the transition.days_since_last_access parameter. For example, the service can consider an object to be accessed only if the GetObject and CopyObject methods were executed by using the rclone user agent. Alternatively, the service can ignore the GetObject and UploadPartCopy methods.

Every filter contains the following fields:
type: Whether listed S3 methods and user agents should be considered to determine if the object is accessed. Possible values are the following:
INCLUDE: Consider only the methods and user agents listed in the filter.
EXCLUDE: Ignore the methods and user agents listed.
methods: List of S3 methods, such as "GET_OBJECT", "HEAD_OBJECT", "GET_OBJECT_TAGGING", "COPY_OBJECT" and "UPLOAD_PART_COPY". If you do not specify the methods parameter, Object Storage considers all methods.
user_agents: List of substrings that user agents (clients addressing an object) can contain. For instance, if you specify rclone, only the user agents with the rclone substring trigger the countdown of days for the rule to apply. If you do not specify the user_agents parameter, Object Storage considers all user agents.
Specify at least one of methods or user_agents. You can also specify both, but you cannot use a filter that contains only the type parameter.
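For example, the INCLUDE case described above (consider an object accessed only when GetObject or CopyObject is executed through a user agent containing rclone) could look like this sketch:

```bash
--lifecycle-configuration-last-access-filter-conditions '[
  {
    "type": "INCLUDE",
    "methods": ["GET_OBJECT", "COPY_OBJECT"],
    "user_agents": ["rclone"]
  }
]'
```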
When you create or update a bucket, set the filters by using the last_access_filter.conditions parameter.
Apply the filtering parameter only if you use the transition.days_since_last_access parameter. For example, the service can consider an object to be accessed only if the GetObject and CopyObject methods were executed by using the rclone user agent. Alternatively, the service can ignore the GetObject and UploadPartCopy methods.

Every filter contains the following fields:
type: Whether listed S3 methods and user agents should be considered to determine if the object is accessed. Possible values are the following:
INCLUDE: Consider only the methods and user agents listed in the filter.
EXCLUDE: Ignore the methods and user agents listed.
methods: List of S3 methods, such as "GET_OBJECT", "HEAD_OBJECT", "GET_OBJECT_TAGGING", "COPY_OBJECT" and "UPLOAD_PART_COPY". If you do not specify the methods parameter, Object Storage considers all methods.
user_agents: List of substrings that user agents (clients addressing an object) can contain. For instance, if you specify rclone, only the user agents with the rclone substring trigger the countdown of days for the rule to apply. If you do not specify the user_agents parameter, Object Storage considers all user agents.
Specify at least one of methods or user_agents. You can also specify both, but you cannot use a filter that contains only the type parameter.
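For example, the INCLUDE case described above could look like this sketch (the exact nesting under last_access_filter.conditions follows the parameter name mentioned on this page):

```hcl
last_access_filter = {
  conditions = [
    {
      type        = "INCLUDE"
      methods     = ["GET_OBJECT", "COPY_OBJECT"]
      user_agents = ["rclone"]
    }
  ]
}
```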
Below are examples of how to create a bucket with a transition lifecycle rule that triggers if an object was not accessed for a long time.
Switch the storage class to Standard if the object has not been accessed for seven days. When determining the last access, ignore GetObject and CopyObject requests made with the rclone user agent.
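A Terraform sketch of this example, under the same assumptions as above (the STANDARD enum value and the exact attribute nesting are assumptions, so verify them against the provider reference):

```hcl
lifecycle_configuration = {
  rules = [
    {
      id     = "standard-after-inactivity"
      status = "ENABLED"
      transition = {
        storage_class          = "STANDARD"
        days_since_last_access = 7
      }
    }
  ]
}

last_access_filter = {
  conditions = [
    {
      type        = "EXCLUDE"
      methods     = ["GET_OBJECT", "COPY_OBJECT"]
      user_agents = ["rclone"]
    }
  ]
}
```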