Abstract
De novo genome assembly is a fundamental problem in bioinformatics that aims to reconstruct the DNA sequence of an unknown genome from numerous short DNA fragments (called reads) obtained from it. With the advent of high-throughput sequencing technologies, billions of reads can be generated in a matter of hours, necessitating efficient parallelization of the assembly process. While multiple parallel solutions have been proposed in the past, conducting genome assembly at scale remains a challenging problem because of the inherent complexities of data movement and the irregular memory and I/O access patterns involved. In this article, we present a novel algorithm, called PaKman, to address the problem of performing large-scale genome assemblies on a distributed memory parallel computer. Our approach focuses on improving performance through a combination of novel data structures and algorithmic strategies for reducing the communication and I/O footprint during the assembly process. PaKman presents a solution for the two most time-consuming phases in the full genome assembly pipeline, namely, k-mer counting and contig generation. A key aspect of our algorithm is its graph data structure (PaK-Graph), which comprises fat nodes (or what we call "macro-nodes") that reduce the communication burden during contig generation. We present an extensive performance and qualitative evaluation of our algorithm across a wide range of genomes (varying in both size and species group), including comparisons to other state-of-the-art parallel assemblers. Our results demonstrate the ability to achieve near-linear speedups on up to 16K cores (tested) on the NERSC Cori supercomputer; to perform better than or comparably to other state-of-the-art distributed memory and shared memory tools while delivering comparable (if not better) quality; and to reduce time to solution significantly. For instance, PaKman is able to generate a high-quality set of assembled contigs for complex genomes such as the human and bread wheat genomes in under a minute on 16K cores. In addition, PaKman was able to successfully process a 3.1 TB simulated dataset of one of the largest known genomes (to date), Ambystoma mexicanum (the axolotl), in just over 200 seconds on 16K cores.
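To make the first of the two phases concrete, the sketch below shows minimal, serial k-mer counting in C++. It is illustrative only: PaKman partitions this computation across MPI ranks with communication- and I/O-reducing strategies that this sketch does not capture, and all function and variable names here are our own.

```cpp
// Minimal serial k-mer counting sketch (illustrative; not PaKman's
// distributed implementation). Counts every length-k substring over
// a collection of reads.
#include <cstddef>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

std::unordered_map<std::string, long> count_kmers(
    const std::vector<std::string>& reads, std::size_t k) {
  std::unordered_map<std::string, long> counts;
  for (const std::string& read : reads) {
    if (read.size() < k) continue;  // read too short to hold a k-mer
    for (std::size_t i = 0; i + k <= read.size(); ++i) {
      ++counts[read.substr(i, k)];  // slide a window of width k
    }
  }
  return counts;
}

int main() {
  // Toy input; real datasets contain billions of short reads.
  std::vector<std::string> reads = {"ACGTACGT", "CGTACGTA"};
  for (const auto& [kmer, n] : count_kmers(reads, /*k=*/4)) {
    std::cout << kmer << '\t' << n << '\n';
  }
}
```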
Highlights
De novo genome assembly is a fundamental problem in computational biology
We compared the performance of PaKman with the latest version of HipMer (v0.1.2.1) available on Cori, a state-of-the-art distributed memory genome assembly tool [14], [15]
Even though PaKman is designed for distributed memory machines, it can be used on shared memory systems that support MPI
Summary
De novo genome assembly is a fundamental problem in computational biology. The goal is to assemble the DNA sequence of an unknown (target) genome using the short fragments (called "reads") obtained from it through sequencing technologies. The output is a set of "contigs" that represent contiguous portions of the target genome. The genome assembly problem has been a topic of interest for well over three decades, and yet the need for new scalable approaches has never been more critical than it is today. The factor driving this need is the continuously growing volume of sequencing data produced by modern high-throughput technologies.
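For intuition on the contig generation phase, the sketch below extends a seed k-mer to the right for as long as it has exactly one successor in the k-mer set, a simplified unitig-style walk over a de Bruijn-style graph. This is a generic illustration, not PaKman's macro-node (PaK-Graph) algorithm; all names are hypothetical.

```cpp
// Illustrative contig extension by unique (k-1)-mer overlap.
// Stops at dead ends and branching nodes; a real assembler would
// also guard against cycles and handle reverse complements.
#include <cstddef>
#include <iostream>
#include <string>
#include <unordered_set>

std::string extend_contig(const std::unordered_set<std::string>& kmers,
                          std::string contig, std::size_t k) {
  const std::string bases = "ACGT";
  while (true) {
    // The last (k-1) characters must overlap the next k-mer.
    std::string suffix = contig.substr(contig.size() - (k - 1));
    char next = 0;
    int branches = 0;
    for (char b : bases) {
      if (kmers.count(suffix + b)) { next = b; ++branches; }
    }
    if (branches != 1) break;  // dead end or ambiguous branch
    contig += next;
  }
  return contig;
}

int main() {
  std::unordered_set<std::string> kmers = {"ACGT", "CGTA", "GTAC"};
  std::cout << extend_contig(kmers, "ACGT", 4) << '\n';  // prints ACGTAC
}
```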