Abstract

It is well known that belief propagation variants for decoding linear codes can be readily unrolled into neural networks by assigning learnable weights to the message-passing edges. In contrast to the conventional top-down training process, in which distillation takes the form of pruning or weight sharing when a smaller model is required, a bottom-up design methodology is proposed to augment the performance of the raw min-sum decoder of LDPC codes by incrementally introducing a few parameters at specific positions of the corresponding neural network. A novel postprocessing method is then devised to cope effectively with decoding failures and further improve performance. For training, a simplified scheme for generating training data is presented by exploiting an approximation to the targeted mixture density, and the trained parameters are found to converge after a sufficient number of iterations, indicating that they generalize to an arbitrarily designated number of decoding iterations. Finally, extensive simulations of three codes over the AWGN and Rayleigh fading channels demonstrate that the design achieves a good tradeoff between low complexity and competitive decoding performance.
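To make the unrolled construction concrete, the following is a minimal sketch, not the authors' implementation, of a single weighted min-sum check-node update in which per-edge scaling factors play the role of the learnable weights assigned to the message-passing edges. The function name, the NumPy formulation, and the particular weight values are assumptions for illustration only.

```python
import numpy as np

def check_node_update(incoming, weights):
    """One weighted (neural) min-sum check-node update.

    incoming : LLR messages arriving from the variable nodes attached
               to this check node, shape (d,)
    weights  : per-edge scaling factors (the learnable parameters);
               setting all weights to 1 recovers the plain min-sum rule

    Returns the extrinsic outgoing message on each edge, i.e. the
    message on edge i is computed from all incoming messages except
    incoming[i].
    """
    d = len(incoming)
    out = np.empty(d)
    for i in range(d):
        others = np.delete(incoming, i)          # exclude the target edge
        sign = np.prod(np.sign(others))          # parity of the other signs
        out[i] = weights[i] * sign * np.min(np.abs(others))
    return out

# Hypothetical example: three incoming messages, with learned weights
# that damp the min-sum magnitudes.
msgs = np.array([1.2, -0.4, 2.5])
w = np.array([0.8, 0.8, 0.8])
print(check_node_update(msgs, w))
```

In a trained decoder these weights would be optimized (for instance by stochastic gradient descent on the unrolled iterations) rather than fixed as in this example.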
