Big biomedical data create exciting opportunities for discovery, but they also make it difficult to capture analyses and outputs in forms that are findable, accessible, interoperable, and reusable (FAIR). In response, we describe tools that make it easy to capture, and to assign identifiers to, data and code throughout the data lifecycle. We illustrate the use of these tools with a case study involving a multi-step analysis that creates an atlas of putative transcription factor binding sites from terabytes of ENCODE DNase I hypersensitive sites sequencing data. We show how the tools automate routine but complex tasks, capture analysis algorithms in understandable and reusable forms, and harness fast networks and powerful cloud computers to process data rapidly, all without sacrificing usability or reproducibility; big data need not be hard-to-(re)use data. We evaluate our approach via a user study in which 91% of participants were able to replicate a complex analysis involving considerable data volumes.
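To make the "capture and assign identifiers" step concrete, the following is a minimal Python sketch assuming the pip-installable bdbag package, which the authors' broader toolchain builds on for data packaging. The tf_binding_results directory name is hypothetical, and minting a persistent identifier (e.g., a Minid) is represented here only by computing the checksum such an identifier would resolve to; this is an illustrative sketch, not the paper's exact pipeline.

    # Sketch: package analysis outputs as a BDBag and derive the checksum
    # a persistent identifier could be minted against. The directory name
    # "tf_binding_results" is an assumption for illustration.
    import hashlib
    from bdbag import bdbag_api

    def sha256_of(path):
        """Stream a file through SHA-256 so large archives never load fully into memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # 1. Convert a plain output directory into a self-describing bag:
    #    bdbag adds payload manifests and per-file checksums in place.
    bdbag_api.make_bag("tf_binding_results")

    # 2. Freeze the bag as a single archive file suitable for transfer or deposit.
    archive_path = bdbag_api.archive_bag("tf_binding_results", "zip")

    # 3. The archive's digest is what a persistent identifier would be minted
    #    against, making the dataset findable and verifiable downstream.
    print(archive_path, sha256_of(archive_path))

In the full workflow, the resulting identifier travels with the data, so downstream steps can fetch and verify exactly the inputs an analysis consumed.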
Madduri, Ravi; Chard, Kyle; D'Arcy, Mike; Jung, Segun C.; Rodriguez, Alexis; Sulakhe, Dinanath; Deutsch, Eric W.; Funk, Cory C.; Heavner, Ben; Richards, Matthew A.; Shannon, Paul; Glusman, Gustavo; Price, Nathan D.; Kesselman, Carl; and Foster, Ian. "Reproducible big data science: A case study in continuous FAIRness." (2019). Articles, Abstracts, and Reports. 1464.