Anthony Fiore is a Storage Specialist Solutions Architect at AWS. His career in IT spans various infrastructure disciplines, including most recently as the lead cloud engineer at Tiffany & Company. We chatted with Anthony about storage trends in the cloud.
How should a customer begin the journey of migrating data to AWS?
We have three big buckets of storage at AWS: block, file, and object. We work to understand where customers are today and their current needs and goals. Generally, they are looking to lift and shift data and run it in AWS just as before, which is helpful when closing a data center. Usually, in six to 12 months we start to talk about modernization and optimization. For example, if you are running a Windows file server on Amazon EC2, you can move to Amazon FSx for Windows File Server. Customers love it because there’s no need to patch or maintain the file system. Some customers decide to use a managed file system right out of the gate on Amazon Web Services. Amazon FSx for NetApp ONTAP is also very popular with customers right now.
We are hearing a lot more about file storage in the cloud lately. When would a customer want to pursue a managed file service like Amazon FSx versus Amazon S3?
Customers often want to look at object storage like Amazon S3, but refactoring their applications, which is usually required, is tough. Typically, they will choose a solution like Komprise to bridge the gap between file and object storage. Otherwise, they are more constrained and need to choose between file and object. They’ll look at their latency requirements, and cost always comes up. Customers have a general sense of what they need, but it’s not fully fleshed out. We can help them understand what they should be asking their business, security, and IT stakeholders about what is important to them. First, they need to determine if the on-premises application can even talk to object storage, and usually it can’t. That’s where Komprise helps!
Watch the short video interview with Anthony:
What are the common use cases for object versus file in AWS?
For Amazon S3, the easy one is backup storage. Another big one is data lakes. In Amazon S3, customers can store the data and use our analytics and ML services on top of it. We often talk about unlocking the value of the data: when that data is aggregated with other data sets, customers can begin to make business decisions on it. Amazon S3 is also great for cold data storage. Amazon S3 Glacier Deep Archive and Amazon S3 Glacier Instant Retrieval are great archival classes for inexpensive storage of data that customers don’t plan to access often. With file storage, we have a lot of different options for customers.
From robust offerings like Amazon FSx for NetApp ONTAP to Windows-native services like Amazon FSx for Windows File Server, we have a file storage offering for every use case now. Some popular use cases we see are file sharing and collaboration, highly transactional workloads, and persistent storage for containers.
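For readers curious what landing cold data directly in an archival class looks like in practice, here is a minimal sketch of the parameters Amazon S3’s PutObject request accepts for this; the bucket and key names are hypothetical, and with boto3 and AWS credentials configured you would pass these straight to `put_object`:

```python
# Sketch: build PutObject parameters that write an object straight into an
# archival storage class. Bucket and key names below are illustrative only.
def archival_put_params(bucket: str, key: str,
                        storage_class: str = "GLACIER_IR") -> dict:
    """Return keyword arguments for an S3 PutObject request targeting an
    archival class such as Amazon S3 Glacier Instant Retrieval."""
    return {
        "Bucket": bucket,
        "Key": key,
        "StorageClass": storage_class,
    }

params = archival_put_params("example-backups", "2024/db-dump.tar.gz")
# With boto3 and AWS credentials configured, the call would look like:
#   boto3.client("s3").put_object(Body=data, **params)
```

Writing the object into Glacier Instant Retrieval at upload time avoids paying Standard-class rates on data that is known to be cold from day one.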
Is the notion of cloud native access and value becoming more important?
If customers can have their cake and eat it too, keeping data on inexpensive Amazon S3 storage but still accessing it in near real time when they want to, it’s a game changer. This opens us up to the art of the possible. Twenty years ago, you’d archive data to tape and it sat there until someone wanted to recover it. What I love about Komprise is that everything is by default transferred to Amazon S3 in native file format, so you can work with it in a data lake or run queries against it, and you don’t need to use Komprise to get it back. As part of a migration strategy, this is a powerful win-win message.
What’s involved if you want to move data between storage tiers on AWS?
First, you need to understand your data access patterns. We have seven Amazon S3 storage classes. The cost of storage gets lower as you move to cold storage such as Amazon S3 Glacier Flexible Retrieval, but the cost of operations, from API requests and retrievals, goes up because Amazon S3 Glacier Flexible Retrieval is designed for archival. We want to get customers to the right storage tier based on their access patterns, but how many customers really know their access patterns, especially with the data sprawl from unstructured data?
Komprise is valuable here: it quickly analyzes all of a customer’s data to show when files were last accessed, so they can make the right business decisions on data storage. Their data can be in the optimal AWS storage class from the very beginning, and as things change over time, Komprise can help them execute on that too.
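As a concrete sketch of executing on access patterns within S3 itself, here is what a lifecycle configuration might look like, shaped as the Python dictionary boto3’s `put_bucket_lifecycle_configuration` expects; the day thresholds, bucket name, and `reports/` prefix are illustrative assumptions, not recommendations:

```python
# Sketch: an S3 lifecycle configuration that tiers objects down as they age.
# The thresholds and the "reports/" prefix are illustrative assumptions.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-down-cold-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "reports/"},
            "Transitions": [
                # After 90 days, move to Amazon S3 Glacier Instant Retrieval.
                {"Days": 90, "StorageClass": "GLACIER_IR"},
                # After a year, move to Amazon S3 Glacier Deep Archive.
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}
# With boto3 and AWS credentials configured, you would apply it like:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
```

Once a rule like this is in place, S3 transitions matching objects automatically; the analysis step described above is what tells you which prefixes and thresholds are actually safe to use.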
What are the common migration challenges for enterprises and how does AWS help its big customers?
Migration is one of our top initiatives. The AWS Migration Acceleration Program starts with an assessment to understand the customer’s footprint and run any proofs of concept, then moves on to planning and executing the actual migration, bringing in external resources from our partners. This way, customers can lower their migration risk and costs, and they don’t feel like they are on their own. Migration can be a scary word. Sometimes it’s a gut check: a customer may say, I want to migrate to AWS in six months. That might be impossible if there is a lot of red tape; it depends on the agility of the customer and whether they will be doing it on their own or getting outside help. We have over 200 services now, including several that help with data migrations, such as the AWS Snowball family of devices that make it easier to move large data sets from on-premises to AWS.
Learn more about how Komprise and AWS deliver value and maximize cost savings for the enterprise cloud journey.