For over a decade, Amazon Simple Storage Service (S3) has offered different levels of data storage classes to efficiently assist users in need of cloud-based storage infrastructure. Several additional storage classes have been added to increase the variety of use cases that S3 can support.
Today, there are seven major storage classes available in AWS S3, optimized for different frequencies of access, levels of data importance, and archiving needs. Amazon recently made changes to its popular Intelligent Tiering class, so it's worth revisiting AWS tiering.
Classes of AWS S3 Storage
These are the main storage classes available through S3:
- Standard (S3) – Used for frequently accessed data (hot data)
- Intelligent Tiering – Used for data with unknown or changing access patterns.
- Standard-Infrequent Access (S3 Standard-IA) – Used for infrequently accessed, long-lived data that needs to be retained but is not being actively used.
- One Zone Infrequent Access (S3 One Zone-IA) – Used for infrequently accessed data that’s long-lived but not critical enough to be covered by storage redundancies across multiple locations.
- Glacier – Used to archive infrequently accessed, long-lived data (cold data). Glacier has a retrieval latency of a few hours.
- Glacier Deep Archive – Used for data that is hardly ever or never accessed and for digital preservation purposes for regulatory compliance.
- Outposts – Used for data on-premises that has local data residency requirements or requires being close to on-premises applications for high performance reasons.
Simple lifecycle policies based on objects’ dates of creation can be implemented to move objects automatically to cheaper S3 storage classes to optimize costs. These policies can be used with the above storage classes. However, when the pattern of data access is less predictable or data is widely accessed by many applications, a more intelligent tiering model is often more cost-efficient.
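As a sketch, a date-based lifecycle policy like the one described above can be expressed as an S3 lifecycle configuration. The bucket name, prefix, and day thresholds below are illustrative assumptions, not values from this article:

```python
# Sketch: a date-based lifecycle policy that transitions objects to
# cheaper storage classes as they age. Day thresholds are illustrative.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "age-based-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}

# Applying it requires boto3 and AWS credentials (bucket name is assumed):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-bucket",
#     LifecycleConfiguration=lifecycle_configuration,
# )
```

Note that these transitions are one-way: unlike Intelligent Tiering, a lifecycle policy will not move an object back to a hotter class if it starts being accessed again.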
How S3 Intelligent Tiering Works
For a monitoring fee, Amazon’s S3 Intelligent Tiering automatically moves objects between tiers within the service. When objects have not been accessed for a certain period of time, they are moved into the infrequent access tier. If they are accessed at a later point in time, they are then automatically moved back into the frequent access tier. Users can further choose to automatically send data to archive tiers that offer asynchronous access.
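Opting into the optional archive tiers is done per bucket with an Intelligent-Tiering configuration. A minimal sketch follows; the configuration ID and day values are illustrative assumptions (AWS requires a minimum of 90 days for the archive tier and 180 for deep archive):

```python
# Sketch: enabling the optional Intelligent-Tiering archive tiers.
# The Id and day values are illustrative assumptions.
intelligent_tiering_config = {
    "Id": "archive-after-inactivity",
    "Status": "Enabled",
    "Tierings": [
        {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
    ],
}

# With boto3 and credentials this would be applied as (bucket name assumed):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_intelligent_tiering_configuration(
#     Bucket="example-bucket",
#     Id=intelligent_tiering_config["Id"],
#     IntelligentTieringConfiguration=intelligent_tiering_config,
# )
```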
This type of unstructured data management strategy can help organizations save on storage costs, mainly in environments where the frequency of data access is uncertain. But it may not always be the best choice of storage class if access frequency is known with high confidence, e.g., via analytics.
Also, S3 Intelligent Tiering is a single storage class, so you cannot treat its internal tiers differently. It acts as a black box – you move objects into it but cannot transparently access individual tiers or set different versioning policies for them. You have to manipulate the entire Intelligent Tiering class as one unit. For example, if you want to transition an object that has versioning enabled, you have to transition all of its versions. Also, when objects move to the archive tiers, access latency is much higher than in the access tiers, and not all applications can tolerate that latency.
When configured to automatically send data to archive storage classes, S3 Intelligent Tiering may require changes to existing workflows if access to archived data is required, since it does not automatically restore data tiered to archives. Additionally, once sufficient time has passed that the probability of access on archived data is low, data needs to be transitioned to the Glacier and Glacier Deep Archive storage classes using lifecycle policies, which avoids paying recurring S3 Intelligent Tiering monitoring costs.
In contrast, you can intelligently tier data based on accurate access patterns for your custom data sets across all S3 storage classes including Glacier and Glacier Deep Archive with Komprise Intelligent Data Management for AWS tiering. You can also set different versioning and other policies for each tier or storage class with Komprise.
S3 Intelligent Tiering Pricing
The cost of Intelligent Tiering is based on how much of each type of storage is being used, how many requests are being made, and how many objects are being monitored. Amazon charges $0.0025 per 1,000 objects monitored. There is no retrieval charge incurred when objects are moved from tier to tier.
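Based on the monitoring rate above, the monthly monitoring fee is straightforward to estimate. The object count below is an illustrative figure, not from the article:

```python
# Monitoring fee: $0.0025 per 1,000 objects monitored, per month.
MONITORING_RATE_PER_1000 = 0.0025

def monthly_monitoring_fee(object_count: int) -> float:
    """Estimated monthly Intelligent Tiering monitoring fee in USD."""
    return object_count / 1000 * MONITORING_RATE_PER_1000

# e.g., 10 million monitored objects (illustrative):
print(monthly_monitoring_fee(10_000_000))  # 25.0
```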
Updates to S3 Intelligent Tiering
In September 2021, AWS made two changes: objects of 128KB or smaller no longer count toward the monitoring fee – these smaller objects are not eligible to be tiered and are always charged at the frequent access rate. Additionally, S3 Intelligent Tiering no longer accrues pro-rated charges for objects deleted, transitioned, or overwritten within 30 days.
Advantages of S3 Intelligent Tiering
- Objects can be assigned a tier upon upload;
- No retrieval fees;
- No tiering fees;
- Objects are moved automatically to cheaper, appropriate tiers based on monitored access patterns;
- No operational overhead;
- No impact on performance;
- Designed for 99.999999999% (11 nines) durability and 99.9% availability over a given year.
Disadvantages of S3 Intelligent Tiering
- If access patterns are predictable, then lifecycle rules may be more cost-effective than Intelligent Tiering;
- It is not straightforward to identify objects that have been in the archive tiers for a long time so that these can be transitioned to Glacier and Glacier Deep Archive storage classes to avoid the S3 Intelligent Tiering monitoring fees;
- It is limited to the S3 frequent, infrequent, and archive tiers, whereas some users may need to move data across EFS, FSx, S3, and Glacier storage classes for maximum efficiency;
- Policies that tier objects to the archive tiers cannot be set beyond two years;
- Objects smaller than 128KB are never moved from the frequent access tier;
- You cannot configure different policies for different groups or custom data sets, as it is an automated management solution that applies to entire buckets, prefixes or tagged data sets.
- Data tiering must be configured and managed at each bucket level, rather than at an account or global level spanning multiple buckets;
- You cannot set different versioning and backup policies for different tiers of S3 Intelligent Tiering; the policy applies to the entire bucket.
Alternatives to S3 Intelligent Tiering
As an AWS Advanced Tier partner, Komprise offers intelligent data management tools that can provide significant savings on AWS storage costs with strategies built from analytics-driven input. While AWS S3 Intelligent Tiering is optimized for unknowns, Komprise analyzes your data so you can make the optimal placement. Additionally, Komprise will retrieve objects from archive classes such as Glacier without the need for administrators to manually issue restore commands.
See how much you could save with the right data management platform providing in-depth insight into AWS storage efficiency. Get in touch with an expert at Komprise today for more information.
Learn more about Komprise for AWS tiering, AWS data migrations, AWS unstructured data management, and our AWS partnership here.