Abstract

Background: In Overlap-Layout-Consensus (OLC) based de novo assembly, all reads must be compared with every other read to find overlaps. This makes the process slow and limits the practicality of using de novo assembly methods at a large scale in the field. Darwin is a fast and accurate read overlapper that can be used for de novo assembly of state-of-the-art third-generation long DNA reads. Darwin is designed to be hardware-friendly and can be accelerated on specialized computer hardware to achieve higher performance.

Results: This work accelerates Darwin on GPUs. Using real PacBio data, our GPU implementation on a Tesla K40 has shown a speedup of 109x vs 8 CPU threads of an Intel Xeon machine and 24x vs 64 threads of an IBM Power8 machine. The GPU implementation supports both linear and affine gap scoring models. The results show that the GPU implementation achieves the same high speedup for different scoring schemes.

Conclusions: The GPU implementation proposed in this work shows a significant improvement in performance compared to the CPU version, thereby making it practical as a read overlapper in a DNA assembly pipeline. Furthermore, our GPU acceleration can also be used for performing fast Smith-Waterman alignment between long DNA reads. GPU hardware has become commonly available in the field today, making the proposed acceleration accessible to a wider public. The implementation is available at https://github.com/Tongdongq/darwin-gpu.
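For context on the scoring models mentioned above, the following is a minimal CPU sketch of an affine-gap (Gotoh-style) Smith-Waterman recurrence of the kind the GPU kernels parallelize. The substitution scores, gap penalties, and function names are illustrative assumptions and are not taken from the darwin-gpu code.

    // Minimal affine-gap local alignment sketch (scores are assumed values).
    #include <algorithm>
    #include <cstdio>
    #include <string>
    #include <vector>

    int affine_sw(const std::string &a, const std::string &b) {
        const int match = 2, mismatch = -3;      // substitution scores (assumed)
        const int gap_open = 5, gap_extend = 1;  // gap penalties (assumed, positive)
        const int m = a.size(), n = b.size();
        // H: best score ending at (i,j); E/F: best score ending in a gap.
        std::vector<std::vector<int>> H(m + 1, std::vector<int>(n + 1, 0));
        std::vector<std::vector<int>> E = H, F = H;
        int best = 0;
        for (int i = 1; i <= m; ++i) {
            for (int j = 1; j <= n; ++j) {
                E[i][j] = std::max(H[i][j - 1] - gap_open, E[i][j - 1] - gap_extend);
                F[i][j] = std::max(H[i - 1][j] - gap_open, F[i - 1][j] - gap_extend);
                int sub = H[i - 1][j - 1] + (a[i - 1] == b[j - 1] ? match : mismatch);
                H[i][j] = std::max({0, sub, E[i][j], F[i][j]});
                best = std::max(best, H[i][j]);
            }
        }
        return best;  // local alignment score
    }

    int main() {
        printf("score = %d\n", affine_sw("ACGTACGT", "ACGTCGT"));
        return 0;
    }

A linear gap model is the special case in which opening and extending a gap cost the same, so the E and F matrices collapse into a single gap term.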

Highlights

  • This work accelerates Darwin on the Graphics Processing Unit (GPU). Using real PacBio data, our GPU implementation on a Tesla K40 has shown a speedup of 109x vs 8 Central Processing Unit (CPU) threads of an Intel Xeon machine and 24x vs 64 threads of an IBM Power8 machine

  • In Overlap-Layout-Consensus (OLC) based de novo assembly, all reads must be compared with every other read to find overlaps

  • Read overlapping is an important step in OLC based de novo assemblers


Summary

Results

We compared our GPU acceleration with the hand-optimized CPU version of Darwin [26] (commit: 16bdb). Even with a 50-megabyte input dataset, the CPU implementation takes more than 2 hours to run on 8 threads of the Intel machine (Fig. 8). We show that the fastest execution time is obtained with 8 CPU threads and GPU launch parameters of 32 blocks and 64 threads per block. We show that the GPU acceleration without coalescing is 2.4x faster than the CPU implementation. Our GPU acceleration is effective for both linear and affine gap penalty scoring models.
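For readers less familiar with CUDA launch configurations, the sketch below illustrates how 32 blocks of 64 threads (the configuration reported above) could distribute alignment tasks over read pairs. The kernel body, buffer names, and task mapping are assumptions for illustration only; the actual darwin-gpu kernels are more involved.

    // Hedged CUDA sketch: map one alignment task per thread with a
    // grid-stride loop, launched with 32 blocks x 64 threads per block.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void align_pairs(const int *scores_in, int *scores_out, int n_pairs) {
        for (int p = blockIdx.x * blockDim.x + threadIdx.x; p < n_pairs;
             p += gridDim.x * blockDim.x) {
            // Placeholder work: a real overlapper would run the alignment
            // tiles for read pair p here. Consecutive threads in a warp touch
            // consecutive addresses, which is the coalesced access pattern
            // that a memory-coalescing optimization aims for.
            scores_out[p] = scores_in[p];
        }
    }

    int main() {
        const int n_pairs = 4096;
        int *d_in, *d_out;
        cudaMalloc(&d_in, n_pairs * sizeof(int));
        cudaMalloc(&d_out, n_pairs * sizeof(int));
        cudaMemset(d_in, 0, n_pairs * sizeof(int));

        // Launch parameters reported in the Results: 32 blocks, 64 threads/block.
        align_pairs<<<32, 64>>>(d_in, d_out, n_pairs);
        cudaDeviceSynchronize();

        printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }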

