Abstract
The demand for scalable and fast error decoders has recently increased in software-defined radio-based communication systems. The Hamming code is a promising error-correcting code that offers acceptable accuracy; however, the computational complexity of its decoder limits its use in real-time communication. To address this issue, this paper proposes a fully parallel implementation of the (7, 4) Hamming decoder on a graphics processing unit (GPU) that exploits massive data parallelism and increases on-chip constant memory accesses. To further improve performance, this paper explores the impact of different thread/block configurations and selects optimal ones that occupy more hardware resources for parity checking, error detection and correction, and decoding of the received codeword. In addition, the proposed GPU-based Hamming decoder provides significant scalability by supporting different message sizes, including 355,907 bytes, 2,959,475 bytes, and 12,835,890 bytes. To verify its effectiveness, this paper compares the performance of the GPU-based parallel Hamming decoder with that of a multi-threaded central processing unit (CPU) implementation executed on an Intel multi-core processor. Experimental results indicate that the proposed GPU-based decoder runs at least 15.13 times faster and reduces energy consumption by up to 913.17 % compared to the multi-threaded CPU-based approach.
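To illustrate the kind of kernel the abstract describes, the following is a minimal CUDA sketch of a one-thread-per-codeword (7, 4) Hamming decoder that keeps its syndrome-to-error-position lookup table in on-chip constant memory. It is not the paper's implementation; the memory layout (one 7-bit codeword per byte), the table name, and the bit ordering are illustrative assumptions.

```cuda
#include <cuda_runtime.h>

// Hypothetical sketch, not the paper's code. One thread decodes one 7-bit
// codeword stored in the low bits of a byte (bit 0 = position 1, ...,
// bit 6 = position 7), with parity bits at positions 1, 2, and 4.

// Syndrome (3 bits) -> bit index of the erroneous bit, or -1 for no error.
// With parity at positions 1, 2, and 4, the syndrome value equals the
// 1-based error position, so the constant-memory table is simply s - 1.
__constant__ int d_err_pos[8] = { -1, 0, 1, 2, 3, 4, 5, 6 };

__global__ void hamming74_decode(const unsigned char *codewords,
                                 unsigned char *messages, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    unsigned int c = codewords[i] & 0x7Fu;  // 7 received bits

    // Parity checks: p1 covers positions 1,3,5,7 (bits 0,2,4,6),
    // p2 covers positions 2,3,6,7 (bits 1,2,5,6),
    // p4 covers positions 4,5,6,7 (bits 3,4,5,6).
    unsigned int s1 = (c ^ (c >> 2) ^ (c >> 4) ^ (c >> 6)) & 1u;
    unsigned int s2 = ((c >> 1) ^ (c >> 2) ^ (c >> 5) ^ (c >> 6)) & 1u;
    unsigned int s4 = ((c >> 3) ^ (c >> 4) ^ (c >> 5) ^ (c >> 6)) & 1u;
    unsigned int syndrome = s1 | (s2 << 1) | (s4 << 2);

    // Single-bit error correction via the constant-memory lookup.
    int pos = d_err_pos[syndrome];
    if (pos >= 0) c ^= 1u << pos;

    // Extract the 4 data bits (positions 3, 5, 6, 7 -> bits 2, 4, 5, 6).
    messages[i] = ((c >> 2) & 1u) | (((c >> 4) & 1u) << 1)
                | (((c >> 5) & 1u) << 2) | (((c >> 6) & 1u) << 3);
}
```

A launch such as `hamming74_decode<<<(n + 255) / 256, 256>>>(d_in, d_out, n)` corresponds to one possible thread/block configuration of the kind the abstract reports tuning; the paper's chosen configuration and data layout may differ.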