Abstract

The degree to which Turbo-Code decoder architectures can be parallelized is constrained by requirements for flexibility with respect to code block sizes and code rates. At the same time, throughput requirements are expected to increase by a factor of up to 20 for 5G networks, which are currently undergoing standardization. The limiting factors for the throughput of a Turbo-Code decoder are the maximum clock frequency and the maximum degree of parallelization at the architecture level. The maximum clock frequency is determined by the critical path, which, for the MAP decoder, lies in the add-compare-select operations. In this paper, we investigate the use of bit-level pipelined add-compare-select units in a highly parallel Turbo-Code decoder. We extend the concept, which has up to now only been used for Viterbi decoders, to support a minimum selection, and we investigate different number representations in detail. Moreover, we present a fully LTE-A Pro compatible decoder architecture based on bit-level pipelining and show that bit-level pipelining allows a 14% increase in throughput at the cost of a 40% area increase.
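To make concrete what the add-compare-select operation on the critical path computes, the following is a minimal behavioral sketch in C. The function names, the 16-bit metric width, and the two-branch trellis are illustrative assumptions, not taken from the paper's architecture; the paper pipelines this operation at the bit level in hardware rather than implementing it in software.

```c
#include <stdint.h>

/* Add-compare-select (ACS) step: add branch metrics to the incoming path
 * metrics, compare the two candidates, and select the survivor (maximum).
 * This is the recursion kernel whose carry chain limits the clock frequency. */
static inline int16_t acs_max(int16_t metric0, int16_t branch0,
                              int16_t metric1, int16_t branch1)
{
    int16_t path0 = metric0 + branch0;        /* add: candidate via branch 0 */
    int16_t path1 = metric1 + branch1;        /* add: candidate via branch 1 */
    return (path0 >= path1) ? path0 : path1;  /* compare-select: keep larger */
}

/* The minimum selection mentioned in the abstract keeps the smaller
 * candidate instead of the larger one. */
static inline int16_t acs_min(int16_t metric0, int16_t branch0,
                              int16_t metric1, int16_t branch1)
{
    int16_t path0 = metric0 + branch0;
    int16_t path1 = metric1 + branch1;
    return (path0 <= path1) ? path0 : path1;
}
```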
