Abstract
The use of a graphics processing unit (GPU) together with a CPU, referred to as GPU-accelerated computing, to speed up computationally intensive tasks has been a major trend in high-performance computing over the past few years. In this paper, we propose a new GPU-accelerated method to parallelize the extraction of a set of features based on the gray-level co-occurrence matrix (GLCM), which may be the most widely used texture-analysis method. The method is evaluated on various GPU devices and compared with its serial counterpart, implemented and optimized in both Matlab and C on a single machine. A series of experiments on magnetic resonance (MR) brain images demonstrates that the proposed method is highly efficient and superior to its serial counterpart, achieving speedups of more than 25–105× in single precision and more than 15–85× in double precision on a GeForce GTX 1080 across ROIs of different sizes.
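The abstract does not spell out how the GLCM computation is mapped onto the GPU; the following is a minimal CUDA sketch, assuming one thread per ROI pixel and atomic accumulation into a global-memory GLCM. The kernel name, parameters, displacement vector (dx, dy), and the assumption of pre-quantized gray levels are illustrative and not taken from the paper.

// Minimal sketch: each thread reads one pixel, pairs it with its neighbor
// at offset (dx, dy), and atomically increments the matching GLCM bin.
// Gray levels are assumed to be pre-quantized into [0, numLevels).
#include <cuda_runtime.h>

__global__ void glcmKernel(const unsigned char* roi, unsigned int* glcm,
                           int width, int height, int numLevels,
                           int dx, int dy)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int nx = x + dx;
    int ny = y + dy;
    if (nx < 0 || nx >= width || ny < 0 || ny >= height) return;

    int i = roi[y * width + x];      // reference gray level
    int j = roi[ny * width + nx];    // neighbor gray level
    atomicAdd(&glcm[i * numLevels + j], 1u);  // accumulate co-occurrence count
}

Haralick-style features (e.g., contrast, energy, homogeneity) would then be computed from the normalized matrix, either on the device or after copying it back to the host; the actual feature set and launch configuration used in the paper are not stated in the abstract.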