The rising cost of storage in genomic IT was a hot topic at BioITWorld 2018, where we presented a healthcare case study with Google and had several in-depth discussions on the importance of efficient data management at scale. Here are our top 3 takeaways from this year's conference:
1) Storage Is a Major Cost Element in Genomic IT
Growth in genomic data generation and research storage is not slowing down anytime soon, and organizations are actively exploring ways to manage massive data growth without breaking the bank.
2) Massive-Scale Data Management Cannot Be a Science Project
A key theme we heard repeatedly was that organizations are tired of science projects – cobbling together home-grown solutions from open-source tools and frameworks is laborious, time-consuming, error-prone, and costly.
Data management needs to become a turnkey, commercially supported solution that works across storage vendors and platforms without lock-in. Several organizations spoke about the years they have spent still trying to get a workable solution out of their home-grown efforts.
Watch how Pacific BioSciences, a genomics leader, used Komprise to transform how they manage genomic test data and 700% YOY storage capacity growth.
3) Bio-IT Data Cannot Be Locked Into Storage Silos
The efficient management of data at scale requires data to be archived to lower-cost capacity storage options such as object storage, tape, and/or the cloud. But if users have to go to a new namespace to find this data, or if the data cannot really be used without going through the primary storage, then the value of the data is very limited. Bio-IT organizations are looking for ways to archive and manage data so that it is transparently accessible no matter where it sits, and in ways that do not break the native analytics capabilities of platforms such as the cloud.
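To make the "transparent access" idea concrete, here is a minimal sketch of one common approach at the file-system level: move a cold file to cheaper storage and leave a symbolic link at its original path, so existing tools and pipelines keep resolving the old location. The helper name and paths are hypothetical illustrations, not any vendor's actual mechanism; commercial solutions do this at scale, across tiers, without per-file scripting.

```python
import os
import shutil
import tempfile

def archive_with_link(src_path: str, archive_dir: str) -> str:
    """Move a file to a cheaper 'archive' tier and leave a symlink at the
    original path, so the original namespace still resolves.
    (Hypothetical helper for illustration only.)"""
    os.makedirs(archive_dir, exist_ok=True)
    dest = os.path.join(archive_dir, os.path.basename(src_path))
    shutil.move(src_path, dest)   # relocate the cold data
    os.symlink(dest, src_path)    # original path now points at the archive copy
    return dest

# Demo in a temporary directory
root = tempfile.mkdtemp()
hot = os.path.join(root, "sample.fastq")
with open(hot, "w") as f:
    f.write("@read1\nACGT\n")
archive_with_link(hot, os.path.join(root, "archive"))

# Reading through the original path still works, even though the
# bytes now live in the archive directory.
print(open(hot).read())
```

The key point of the sketch is that consumers of the data never change their paths; the same idea, generalized across NAS, object storage, and cloud tiers, is what "no new namespace" means in practice.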
(Source: Stephens ZD, Lee SY, Faghri F, Campbell RH, Zhai C, Efron MJ, et al. (2015) Big Data: Astronomical or Genomical? PLoS Biol 13(7): e1002195. doi:10.1371/journal.pbio.1002195)