Abstract

The analysis of complex biological datasets beyond DNA is attracting increasing interest in current bioinformatics. In particular, protein sequence data introduce additional layers of complexity that pose new challenges from a computational perspective. This work investigates GPU solutions to address these issues in a representative algorithm from the phylogenetics field: Fitch’s parsimony. GPU strategies are adopted in accordance with the protein-based formulation of the problem, defining an optimized kernel that exploits data parallelism across the calculations associated with different amino acids. In order to understand the relationship between problem sizes and GPU capabilities, an extensive evaluation is conducted on a wide range of GPUs, covering all the recent NVIDIA GPU architectures—from Kepler to Turing. Experimental results on five real-world datasets highlight the benefits of exploiting state-of-the-art GPUs, which represent a fitting approach to the increasing computational hardness of protein sequence datasets.
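The core per-site operation of Fitch's parsimony can be illustrated with a short sketch. The following is a minimal CPU reference in Python, not the paper's CUDA kernel: each of the 20 amino acids is mapped to one bit of a state-set mask, and an internal node takes the intersection of its children's sets when non-empty (no substitution) or their union otherwise (one inferred substitution). The GPU strategy described in the abstract parallelizes this kind of per-amino-acid, per-site work; all function and variable names here are illustrative.

```python
# Hedged CPU sketch of Fitch's small-parsimony step for one alignment
# column, using 20-bit amino-acid state sets. Names are illustrative,
# not taken from the paper's implementation.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_BIT = {aa: 1 << i for i, aa in enumerate(AMINO_ACIDS)}

def fitch_combine(left_set, right_set):
    """Combine two child state sets (bitmasks).
    Returns (parent_set, cost): the intersection at cost 0 if it is
    non-empty, otherwise the union at cost 1 (one substitution)."""
    inter = left_set & right_set
    if inter:
        return inter, 0
    return left_set | right_set, 1

def fitch_score(tree, column):
    """Post-order Fitch pass over a binary tree given as nested tuples
    of leaf names; `column` maps each leaf name to its amino acid at
    one alignment site. Returns (root state set, parsimony cost)."""
    if isinstance(tree, str):  # leaf: singleton state set, cost 0
        return AA_BIT[column[tree]], 0
    left, right = tree
    lset, lcost = fitch_score(left, column)
    rset, rcost = fitch_score(right, column)
    pset, step = fitch_combine(lset, rset)
    return pset, lcost + rcost + step
```

Because each site is scored independently, a GPU formulation can assign sites (and the bitwise set operations within them) to parallel threads, which is the kind of data parallelism the abstract refers to.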
