Abstract

Processing big data is a computationally demanding task that has traditionally been handled by HPC batch systems. These complex systems pose a challenge to scientists because of their cumbersome nature and changing environments. Scientists often lack deeper informatics expertise, and experiment reproducibility is increasingly becoming a hard requirement for research validity. A newer computational paradigm, containers, is designed to package all dependencies and persist state, which helps reproducibility. Containers have gained considerable popularity in the informatics community, but the HPC community remains skeptical, doubting that container platforms are appropriate for demanding tasks or that such infrastructure can deliver adequate performance. In this paper, we measure the performance of several infrastructure types (HPC, Kubernetes, local) on the Sarek Nextflow bioinformatics workflow with real-life genomics data of various sizes. We analyze the obtained workload traces and discuss the pros and cons of the infrastructures used. We also show that some approaches perform better in terms of available resources, while others are more suitable for diversified workflows. Based on the results, we provide recommendations for life science groups that plan to analyze data at large scale.
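
For readers unfamiliar with how a single Nextflow workflow such as Sarek can target these different infrastructure types, the sketch below shows a minimal, illustrative configuration. The profile names, queue name, and file paths are assumptions for illustration only, not the configuration used in the paper.

// nextflow.config -- illustrative only; profile names, queue, and paths are assumptions
profiles {
    standard {
        process.executor = 'local'    // run tasks on the local machine
    }
    hpc {
        process.executor = 'slurm'    // submit tasks to a Slurm batch scheduler
        process.queue    = 'batch'    // assumed queue name
    }
    k8s {
        process.executor = 'k8s'      // dispatch each task as a Kubernetes pod
    }
}

// Example invocation, switching infrastructure by profile:
// nextflow run nf-core/sarek -profile hpc --input samples.csv --outdir results

The workflow definition itself stays the same across runs; only the executor binding changes, which is what makes this kind of cross-infrastructure comparison possible.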
