Abstract

The state-of-the-art decoding algorithm for polar codes is the successive cancellation list (SCL) decoder, which approaches the error-rate performance of the maximum likelihood (ML) decoder for a sufficiently large list size. However, SCL decoding comes at the price of high decoding latency due to its sequential nature, and high-throughput implementations are challenging. In contrast, the iterative belief propagation (BP) decoder offers inherent parallelism, enabling high-throughput and low-latency implementations, but suffers from degraded error-rate performance. We train a neural network (NN)-based estimator to pre-select, based only on the received noisy channel observation, the most efficient decoder (i.e., BP or SCL) before decoding. In other words, we aim to have the best of both worlds and, thus, to use the BP decoder whenever possible. This enables the same (good) error-rate performance as the SCL decoder but with higher decoding efficiency and, in particular, a lower average decoding latency. We implement and train this scheme as a ternary classifier that can immediately request a re-transmission when no decoding success is expected at all. Further, we show that the main difficulty during training is the unequal distribution of events in the dataset (at practical signal-to-noise ratios (SNRs), a decoding failure is an unlikely event), which nonetheless strongly impacts the effective overall performance. Finally, we analyze different training methodologies to circumvent this limitation.
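The core training difficulty the abstract describes, a ternary classification task (BP succeeds / only SCL succeeds / decoding fails) with a heavily imbalanced label distribution, can be illustrated with a minimal sketch. The paper's actual NN architecture and dataset are not given here; this toy example uses synthetic features, a simple softmax classifier, and inverse-frequency class weights as one common remedy for the imbalance. All names and numbers are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for labeled channel observations:
# class 0 = BP suffices, class 1 = SCL needed, class 2 = decoding failure.
# The counts are deliberately skewed, mimicking the rarity of failures
# at practical SNR.
n_per_class = [800, 150, 50]
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n, 4))
               for c, n in enumerate(n_per_class)])
y = np.concatenate([np.full(n, c) for c, n in enumerate(n_per_class)])

# Inverse-frequency class weights counteract the skewed label distribution,
# so the rare "failure" class still contributes to the loss.
counts = np.bincount(y)
class_w = counts.sum() / (len(counts) * counts)

# Softmax regression trained with class-weighted cross-entropy
# via plain gradient descent.
W = np.zeros((4, 3))
b = np.zeros(3)
onehot = np.eye(3)[y]
for _ in range(300):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    g = (p - onehot) * class_w[y][:, None] / len(y)  # weighted CE gradient
    W -= 0.5 * (X.T @ g)
    b -= 0.5 * g.sum(axis=0)

pred = (X @ W + b).argmax(axis=1)
# Per-class recall: with weighting, the minority "failure" class
# is not simply ignored in favor of the majority class.
recall = [float(np.mean(pred[y == c] == c)) for c in range(3)]
print([round(r, 2) for r in recall])
```

Without the class weights, a classifier on such data tends to predict the majority class almost exclusively, which is exactly the effect the abstract identifies as the main obstacle during training.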

