Abstract

Two parallel strategies for the fast wavelet transform based on GPU acceleration are presented, following the theory of the fast wavelet transform: (1) a 2D version of the analysis and synthesis filter banks is obtained by applying a 1D analysis filter bank first to the columns of the image and then to the rows, with the process reprogrammed in CUDA; (2) the fast wavelet transform is implemented as matrix products using the highly optimized CUBLAS library. The results indicate that a 25x speedup can be achieved on the GPU over the CPU counterpart for a one-scale Daubechies wavelet transform, which demonstrates the significance of parallelizing the fast wavelet transform algorithm with general-purpose GPU technology. The performance optimization strategies and the CUDA GPU occupancies are measured and discussed in detail.
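The two strategies in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's CUDA/CUBLAS implementation: `analysis_1d` and `dwt_matrix` are hypothetical helper names, the Daubechies D4 filter is assumed as the "Daubechies wavelet", and periodic boundary handling is assumed. The separable form (filter columns, then rows) corresponds to strategy (1), while the matrix form `W A W^T` corresponds to strategy (2), since it reduces to two dense matrix products suitable for a GEMM routine.

```python
import numpy as np

# Daubechies D4 low-pass coefficients (standard closed-form values)
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
# High-pass filter via the quadrature-mirror relation g[k] = (-1)^k h[3-k]
g = np.array([h[3], -h[2], h[1], -h[0]])

def analysis_1d(x):
    """One level of the 1D analysis filter bank: periodic convolution with
    h (approximation) and g (detail), each followed by downsampling by 2."""
    n = len(x)
    lo = np.zeros(n // 2)
    hi = np.zeros(n // 2)
    for i in range(n // 2):
        for k in range(4):
            lo[i] += h[k] * x[(2 * i + k) % n]
            hi[i] += g[k] * x[(2 * i + k) % n]
    return lo, hi

def dwt2_separable(img):
    """Strategy (1): one 2D scale by filtering the columns, then the rows."""
    cols = np.array([np.concatenate(analysis_1d(img[:, j]))
                     for j in range(img.shape[1])]).T
    rows = np.array([np.concatenate(analysis_1d(cols[i, :]))
                     for i in range(cols.shape[0])])
    return rows  # quadrants: LL (top-left), LH, HL, HH

def dwt_matrix(n):
    """Strategy (2): build the orthogonal D4 transform matrix W so that one
    2D scale is Y = W @ A @ W.T, i.e. two dense matrix products (GEMMs)."""
    W = np.zeros((n, n))
    for i in range(n // 2):
        for k in range(4):
            W[i, (2 * i + k) % n] = h[k]
            W[n // 2 + i, (2 * i + k) % n] = g[k]
    return W
```

In this sketch the two strategies are numerically identical: filtering columns then rows computes exactly `W @ A @ W.T`. The practical difference on a GPU is that the matrix form can delegate all work to a highly tuned GEMM kernel, at the cost of building (and multiplying by) an `n x n` matrix that is mostly zeros.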
