Abstract

A deep learning method for improving the performance of a polar belief propagation (BP) decoder equipped with a one-bit quantizer is proposed. The method generalizes the standard polar BP algorithm by assigning weights to the layers of the unfolded factor graph; these weights can be learned automatically with deep learning techniques. We prove that the improved polar BP decoder has a symmetric structure, so the weights can be trained with the all-zero codeword rather than an exponential number of codewords. To accelerate training convergence, a layer-based weight assignment scheme is designed, which reduces the number of trainable weights. Simulation results show that the improved polar BP decoder with a one-bit quantizer outperforms the standard polar BP decoder with a 2-bit quantizer and achieves faster convergence.
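
To make the idea in the abstract concrete, the following is a minimal sketch (not the authors' released code) of a min-sum processing-element update in which every outgoing message is scaled by a trainable weight shared across its layer of the unfolded factor graph. The names `minsum`, `weighted_pe_update`, and `alpha`, and the exact placement of the weights, are illustrative assumptions based only on the layer-based scheme described above.

```python
# Minimal sketch: a weighted min-sum PE update for polar BP with one
# trainable weight per layer of the unfolded factor graph (assumption:
# the weight scales all four outgoing messages of the PE).
import torch

def minsum(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Min-sum approximation of the LLR box-plus operation."""
    return torch.sign(a) * torch.sign(b) * torch.minimum(a.abs(), b.abs())

def weighted_pe_update(L1, L2, R1, R2, alpha):
    """One processing element (PE) of the polar BP factor graph.

    L1, L2: left-propagating messages entering the PE from the right.
    R1, R2: right-propagating messages entering the PE from the left.
    alpha:  trainable scalar shared by every PE in the same layer of
            the unfolded factor graph (layer-based weight assignment).
    """
    L1_out = alpha * minsum(L1, L2 + R2)
    L2_out = alpha * (minsum(R1, L1) + L2)
    R1_out = alpha * minsum(R1, L2 + R2)
    R2_out = alpha * (minsum(R1, L1) + R2)
    return L1_out, L2_out, R1_out, R2_out

# One weight per layer and per iteration: for N = 2^n and T BP
# iterations this gives on the order of n * T trainable weights,
# far fewer than one weight per edge of the unfolded graph.
n_stages, n_iters = 3, 5                      # e.g. N = 8, T = 5
alphas = torch.nn.Parameter(torch.ones(n_iters, n_stages))
```

In training, such `alphas` would be optimized end-to-end with a loss on the decoded bits (for example, binary cross-entropy), which is the usual practice for BP-based neural network decoders.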

Highlights

  • Polar codes are a novel channel coding technique proposed by Arıkan [1] and have been proven to achieve the capacity of binary-input discrete memoryless channels (B-DMCs) under successive cancellation (SC) decoding

  • The polar belief propagation (BP) algorithm can be performed over a factor graph, as illustrated in Fig. 1, where message nodes are represented by black circles and processing elements (PEs) are enclosed in dotted rectangles (a minimal message-passing sketch is given after this list)

  • We can see from the table that the neural network decoder (NND) reaches the same block error rate (BLER) performance with fewer training batches than MSBP, which means that the proposed layer-based weighting scheme improves the learning efficiency of the neural network
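
To complement the factor-graph highlight above, here is a self-contained sketch of standard (unweighted) min-sum polar BP over the n layers of PEs. The natural-order indexing (no bit-reversal), the LLR convention "positive means bit 0", the helper names (`polar_bp_decode`, `pe_top_rows`), and the example frozen mask are assumptions for illustration, not reproduced from the paper or its Fig. 1.

```python
# Self-contained sketch of standard min-sum polar BP (no learned
# weights), over (n + 1) columns of N message nodes and n layers of
# N/2 PEs; indexing convention is an assumption.
import numpy as np

def minsum(a, b):
    """Min-sum approximation of the LLR box-plus operation."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def pe_top_rows(N, c):
    """Top-row indices of the PEs between columns c and c + 1."""
    d = N >> (c + 1)
    return [b + j for b in range(0, N, 2 * d) for j in range(d)]

def polar_bp_decode(llr_ch, frozen, n_iters=10):
    """llr_ch: N channel LLRs (N = 2^n); frozen: boolean frozen-bit mask."""
    N = len(llr_ch)
    n = N.bit_length() - 1
    L = np.zeros((n + 1, N))           # left-propagating messages
    R = np.zeros((n + 1, N))           # right-propagating messages
    L[n] = llr_ch                      # rightmost column: channel LLRs
    R[0, frozen] = np.inf              # frozen bits are known zeros

    for _ in range(n_iters):
        for c in range(n - 1, -1, -1):             # right-to-left pass
            d = N >> (c + 1)
            for j in pe_top_rows(N, c):
                L[c, j]     = minsum(L[c+1, j], L[c+1, j+d] + R[c, j+d])
                L[c, j+d]   = minsum(R[c, j],   L[c+1, j])  + L[c+1, j+d]
        for c in range(n):                         # left-to-right pass
            d = N >> (c + 1)
            for j in pe_top_rows(N, c):
                R[c+1, j]   = minsum(R[c, j],   L[c+1, j+d] + R[c, j+d])
                R[c+1, j+d] = minsum(R[c, j],   L[c+1, j])  + R[c, j+d]

    u_hat = (L[0] + R[0]) < 0          # hard decision on the u side
    u_hat[frozen] = False
    return u_hat.astype(int)

# Example: decode an all-zero codeword observed as all-positive LLRs.
# The frozen mask is an illustrative choice for N = 8, K = 4.
frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
print(polar_bp_decode(np.full(8, 2.0), frozen))    # -> all zeros
```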

Summary

INTRODUCTION

Polar codes are a novel channel coding technique proposed by Arıkan [1] and have been proven to achieve the capacity of binary-input discrete memoryless channels (B-DMCs) under successive cancellation (SC) decoding. One-bit quantized massive multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) systems have been studied in [7]–[9]. In applications such as the Internet of Things, cyber-physical systems, and wireless sensor networks, low-delay transfer of analog measurements is a more relevant task than high-rate communication [10], [11]. A series of model-driven neural network decoders, built on the standard BP algorithm and referred to as BP-NNs, are proposed in [17]–[23]; they can be trained with an all-zero codeword. However, BP-NNs lack a theoretical basis and cannot maintain fast training convergence in the zero-delay model equipped with a one-bit quantizer. This work shows that the improved decoder retains the property of BP-NNs that they can be trained with the all-zero codeword.
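
Since the introduction centers on training BP-NNs with the all-zero codeword behind a one-bit quantizer, the following is a hedged sketch of how such training samples could be generated. The Eb/N0 parameterization, the sign quantizer, and the BSC-style LLR mapping are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch: all-zero-codeword training batches for a BP-NN whose
# channel output is quantized to one bit.  The sign quantizer and the
# BSC-style LLR mapping below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def one_bit_training_llrs(N, ebn0_db, batch_size, rate=0.5):
    """BPSK over AWGN, quantized to +/-1, mapped back to LLRs."""
    sigma = np.sqrt(1.0 / (2.0 * rate * 10.0 ** (ebn0_db / 10.0)))
    x = np.zeros((batch_size, N))               # all-zero codeword
    s = 1.0 - 2.0 * x                           # BPSK: 0 -> +1, 1 -> -1
    y = s + sigma * np.random.randn(batch_size, N)
    q = np.sign(y)                              # one-bit quantizer
    p = norm.sf(1.0 / sigma)                    # crossover probability of
                                                # the equivalent BSC
    return q * np.log((1.0 - p) / p)            # LLRs fed to the decoder
```

Because of the symmetry property proved in the paper, training on such all-zero samples is sufficient; the network never needs to see other codewords.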

POLAR CODING
BP DECODING OF POLAR CODES
THE SYSTEM MODEL WITH A ONE-BIT QUANTIZER
THE NND STRUCTURE
LAYER-BASED WEIGHTING SCHEME
SYMMETRY OF THE NND
TRAINING THE NND
Findings
CONCLUSION
