Abstract

Two parallel strategies for the fast wavelet transform (FWT) based on GPU acceleration are presented, following the theory of the fast wavelet transform: (1) a 2D version of the analysis and synthesis filter banks is implemented by applying a 1D analysis filter bank first to the columns of the image and then to the rows, with the process reprogrammed in CUDA; (2) the fast wavelet transform is expressed as matrix products and implemented with the highly optimized CUBLAS library. The results indicate that a speedup of 25x over the CPU counterpart can be achieved on the GPU for a one-scale Daubechies wavelet transform, which demonstrates the value of parallelizing the fast wavelet transform algorithm using general-purpose GPU technology. The performance optimization strategies and the CUDA GPU occupancies are measured and discussed in detail.
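To make the two strategies concrete, here is a minimal NumPy sketch (not the authors' CUDA code) of one scale of the 2D transform. Haar filters are used for brevity where the paper uses Daubechies filters; the separable column-then-row structure and the matrix-product formulation are the same. The function names and the matrix construction are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Haar analysis filters (the paper uses Daubechies filters; same structure)
H = np.array([1.0, 1.0]) / np.sqrt(2.0)   # lowpass analysis filter
G = np.array([1.0, -1.0]) / np.sqrt(2.0)  # highpass analysis filter

def analyze_1d(x, axis):
    """1D analysis filter bank along `axis`: convolve, then downsample by 2."""
    conv = lambda v, f: np.convolve(v, f)[1::2]
    return (np.apply_along_axis(conv, axis, x, H),
            np.apply_along_axis(conv, axis, x, G))

def dwt2_separable(image):
    """Strategy (1): filter the columns, then the rows of each subband."""
    lo, hi = analyze_1d(image, axis=0)
    ll, lh = analyze_1d(lo, axis=1)
    hl, hh = analyze_1d(hi, axis=1)
    return ll, lh, hl, hh

def analysis_matrix(n):
    """Strategy (2): one analysis level as an n-by-n orthogonal matrix, so the
    2D transform becomes two matrix products (W @ X @ W.T) -- the GEMM
    operation that a library such as CUBLAS accelerates."""
    W = np.zeros((n, n))
    for i in range(n // 2):
        W[i, 2 * i:2 * i + 2] = H[::-1]            # lowpass rows
        W[n // 2 + i, 2 * i:2 * i + 2] = G[::-1]   # highpass rows
    return W

X = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = dwt2_separable(X)
W = analysis_matrix(8)
Y = W @ X @ W.T  # subbands stacked as [[LL, LH], [HL, HH]]
print(np.allclose(Y[:4, :4], ll))  # True: both strategies agree on LL
```

Both formulations compute the same coefficients; the filter-bank form maps naturally onto per-row/per-column CUDA kernels, while the matrix form reuses an existing tuned GEMM routine.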

