Abstract

In this letter, we present a graphics processing unit (GPU)-based LDPC convolutional code (LDPC-CC) pipeline decoder with optimized parallelism. The proposed decoder exploits different granularities of decoding parallelism in both the compute unified device architecture (CUDA) kernel execution stage and the data transfer stage. Moreover, parameter selection criteria for the decoder implementation are designed to avoid an exhaustive search over all parameter combinations. Experiments are carried out on NVIDIA GTX 460 and GTX 580 platforms. The results demonstrate that the proposed decoder achieves an approximately threefold speedup over existing GPU-based work.
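To make the pipelining idea concrete, the following is a minimal CUDA sketch, not the authors' implementation, of the standard mechanism behind this kind of pipelined decoding: overlapping kernel execution with host-device data transfers via CUDA streams. The kernel decode_iteration (a stand-in for one LDPC-CC message-passing iteration), the stream count, and the chunk size are illustrative assumptions.

    // Sketch: overlap transfers and compute with CUDA streams.
    // decode_iteration, NUM_STREAMS, and CHUNK are illustrative assumptions.
    #include <cuda_runtime.h>

    #define NUM_STREAMS 4    // depth of the transfer/compute pipeline
    #define CHUNK (1 << 16)  // LLR values processed per pipeline chunk

    // Placeholder for one LDPC-CC message-passing iteration on a chunk.
    __global__ void decode_iteration(float *llr, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) llr[i] = llr[i];  // stand-in for node-update computation
    }

    int main(void) {
        float *h_llr, *d_llr;
        // Pinned host memory is required for truly asynchronous copies.
        cudaMallocHost(&h_llr, NUM_STREAMS * CHUNK * sizeof(float));
        cudaMalloc(&d_llr, NUM_STREAMS * CHUNK * sizeof(float));

        cudaStream_t streams[NUM_STREAMS];
        for (int s = 0; s < NUM_STREAMS; ++s) cudaStreamCreate(&streams[s]);

        // Each stream copies its chunk in, decodes it, and copies it out;
        // the driver overlaps one stream's kernel with another's transfer.
        for (int s = 0; s < NUM_STREAMS; ++s) {
            size_t off = (size_t)s * CHUNK;
            cudaMemcpyAsync(d_llr + off, h_llr + off, CHUNK * sizeof(float),
                            cudaMemcpyHostToDevice, streams[s]);
            decode_iteration<<<(CHUNK + 255) / 256, 256, 0, streams[s]>>>(
                d_llr + off, CHUNK);
            cudaMemcpyAsync(h_llr + off, d_llr + off, CHUNK * sizeof(float),
                            cudaMemcpyDeviceToHost, streams[s]);
        }
        cudaDeviceSynchronize();

        for (int s = 0; s < NUM_STREAMS; ++s) cudaStreamDestroy(streams[s]);
        cudaFree(d_llr);
        cudaFreeHost(h_llr);
        return 0;
    }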
