Abstract

The success of deep learning has encouraged its application to decoding error-correcting codes, e.g., LDPC decoding. In this paper, we propose a model-driven deep learning method for normalized min-sum (NMS) low-density parity-check (LDPC) decoding, termed the neural NMS (NNMS) LDPC decoding network. By unfolding the iterative decoding process between check nodes (CNs) and variable nodes (VNs) into a feed-forward network, we harvest the benefits of both model-driven deep learning and the conventional NMS LDPC decoding method. In addition, we propose a shared-parameter NNMS with LeakyReLU activations and a 12-bit quantizer (SNNMS-LR-Q), which reduces the number of required multipliers and correction factors through parameter sharing and increases the nonlinear fitting ability through the LeakyReLU. The 12-bit quantizer further improves the decoder's robustness. Thorough experiments with different code lengths, code rates, channel conditions, and parity-check matrices demonstrate the advantages and robustness of the proposed networks. The BER performance of the proposed NNMS is 1.5 dB better than that of NMS, while using fewer iterations. Meanwhile, the SNNMS-LR-Q outperforms the NNMS in both BER performance and efficiency.
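To make the underlying operation concrete, the following is a minimal sketch of a single check-node update in normalized min-sum decoding, the step that the NNMS network unrolls into one layer per iteration with the normalization factor learned instead of fixed. The function name `nms_check_node_update` and the arguments `v2c` and `alpha` are illustrative and not taken from the paper.

```python
import numpy as np

def nms_check_node_update(v2c, alpha):
    """One check-node update of normalized min-sum decoding.

    v2c   : variable-to-check messages (LLRs) arriving at one check node
    alpha : normalization factor; a fixed constant in classical NMS,
            a trainable parameter in an unrolled NNMS-style network
    Returns the check-to-variable messages for the same edges.
    """
    v2c = np.asarray(v2c, dtype=float)
    c2v = np.empty(len(v2c))
    for i in range(len(v2c)):
        others = np.delete(v2c, i)           # extrinsic rule: exclude the target edge
        sign = np.prod(np.sign(others))      # product of the signs of the other messages
        mag = np.min(np.abs(others))         # minimum magnitude (the "min" in min-sum)
        c2v[i] = alpha * sign * mag          # normalization factor scales the magnitude
    return c2v

# Example: three incoming LLRs on one check node, alpha = 0.8
print(nms_check_node_update([1.2, -0.5, 2.3], alpha=0.8))
```

In the unrolled network described in the abstract, each decoding iteration becomes a layer of such CN and VN updates, and `alpha` (per iteration, or shared across layers in the SNNMS variant) is trained by back-propagation.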
