Abstract

This paper presents SeQual, a scalable tool to efficiently perform quality control of large genomic datasets. Our tool currently supports more than 30 different operations (e.g., filtering, trimming, formatting) that can be applied to DNA/RNA reads in FASTQ/FASTA formats to improve subsequent downstream analyses, while providing a simple and user-friendly graphical interface for non-expert users. Furthermore, SeQual takes full advantage of Big Data technologies to process massive datasets on distributed-memory systems such as clusters by relying on the open-source Apache Spark cluster computing framework. Our scalable Spark-based implementation reduces the runtime from more than three hours to less than 20 minutes when processing a paired-end dataset with 251 million reads per input file on an 8-node multi-core cluster.

Highlights

  • The development of Next-Generation Sequencing (NGS) technologies [1], [2] has revolutionized biological research over the last decade by drastically decreasing the cost of DNA/RNA sequencing and significantly increasing the throughput of generated data.

  • Most bioinformatics pipelines start by applying quality control to the input datasets in order to increase the accuracy of subsequent processing.

  • To efficiently implement all the functionality provided by SeQual, each supported quality operation must be translated into the appropriate combination of transformations/actions to be performed over the input Resilient Distributed Dataset (RDD), which has been previously created using the Hadoop Sequence Parser (HSP) library (see the sketch after this list).

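The following minimal Java sketch illustrates this translation for a single operation: filtering reads by mean Phred quality. The Read class, the quality threshold, and the in-memory sample are hypothetical stand-ins introduced here for illustration; SeQual's actual sequence records are created by HSP, and its operations are configured through the graphical interface rather than hard-coded.

    import java.util.Arrays;
    import java.util.List;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public final class QualityFilterSketch {

        // Hypothetical stand-in for a FASTQ read; SeQual's real records
        // are created by the Hadoop Sequence Parser (HSP) library.
        public static final class Read implements java.io.Serializable {
            final String name, bases, quals;
            Read(String name, String bases, String quals) {
                this.name = name; this.bases = bases; this.quals = quals;
            }
            // Mean Phred score, assuming Phred+33 encoded quality strings.
            double avgQuality() {
                double sum = 0;
                for (int i = 0; i < quals.length(); i++) sum += quals.charAt(i) - 33;
                return sum / quals.length();
            }
        }

        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("QualityFilterSketch").setMaster("local[*]");
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                // Tiny in-memory sample; SeQual would instead build the RDD
                // from FASTQ/FASTA files through the HSP input format.
                List<Read> sample = Arrays.asList(
                        new Read("r1", "ACGT", "IIII"),   // mean quality Q40
                        new Read("r2", "ACGT", "!!!!"));  // mean quality Q0
                JavaRDD<Read> reads = sc.parallelize(sample);

                // A "filter by mean quality" operation maps to a single Spark
                // transformation; trimming would be a map(), duplicate removal
                // a distinct()/reduceByKey(), and so on. Transformations are
                // lazy: only an action such as count() triggers execution.
                JavaRDD<Read> kept = reads.filter(r -> r.avgQuality() >= 25.0);
                System.out.println("Reads kept: " + kept.count());
            }
        }
    }

Because transformations are lazy, Spark can fuse a chain of such operations into a single pass over the distributed dataset, which is what makes this translation scheme efficient for large inputs.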


INTRODUCTION

The development of Next-Generation Sequencing (NGS) technologies [1], [2] has revolutionized biological research over the last decade by drastically decreasing the cost of DNA/RNA sequencing and significantly increasing the throughput of generated data. Most bioinformatics pipelines start by applying quality control to the input datasets in order to increase the accuracy of subsequent processing. Some examples of these operations are the removal of duplicate reads, the deletion of reads with low average quality, or their transformation to keep only the fragments with high quality (trimming; see the sketch below). Although some parallel tools can accelerate these computations on shared-memory systems thanks to efficient multithreading support, this is not enough to complete the quality control of current large datasets in a reasonable time, since their scalability is limited to the resources of a single machine. In this context, the exploitation of Big Data technologies seems an adequate approach to accelerate those computations on distributed-memory systems such as clusters and cloud platforms, as extensively demonstrated by the existing literature [6]–[8].
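As a concrete example of one such operation, the following Java sketch shows a simple 3'-end quality trimming routine under stated assumptions: Phred+33 quality encoding and a fixed score threshold of Q20. The method name, the cutoff, and the tail-only strategy are illustrative choices and do not reflect SeQual's actual trimming implementation.

    public final class TrimSketch {

        // Cut the read at the last position whose Phred+33 score reaches the
        // threshold, dropping the low-quality 3' tail. The threshold and the
        // tail-only strategy are illustrative, not SeQual's algorithm.
        static String[] trimTail(String bases, String quals, int minPhred) {
            int end = quals.length();
            while (end > 0 && quals.charAt(end - 1) - 33 < minPhred) {
                end--;
            }
            return new String[] { bases.substring(0, end), quals.substring(0, end) };
        }

        public static void main(String[] args) {
            // "II##" encodes Phred scores 40,40,2,2, so the last two bases
            // fall below Q20 and are removed: ACGT/II## becomes AC/II.
            String[] trimmed = trimTail("ACGT", "II##", 20);
            System.out.println(trimmed[0] + " / " + trimmed[1]);
        }
    }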

RELATED WORK
IMPLEMENTATION
APACHE SPARK
SPARK-BASED QUALITY CONTROL AND PREPROCESSING
PERFORMANCE EVALUATION
CONCLUSIONS
