Abstract
Turbo decoders inherently suffer from large decoding latency and low throughput due to iterative decoding. To increase the throughput and reduce the latency, high-speed decoding schemes must be employed. In this paper, following a discussion of basic parallel decoding architectures, the segmented sliding window approach and two other types of area-efficient parallel decoding schemes are proposed. A detailed comparison of the storage requirements, the number of computation units, and the overall decoding latency is provided for the various decoding schemes at different levels of parallelism. Hybrid parallel decoding schemes are proposed as an attractive solution for implementations with very high levels of parallelism. To reduce the storage bottleneck in each subdecoder, a modified version of the partial storage of state metrics approach is presented; the new approach achieves a better tradeoff between storage and recomputation in general. The application of the pipeline-interleaving technique to parallel turbo decoding architectures is also presented. Simulation results demonstrate that the proposed area-efficient parallel decoding schemes do not cause performance degradation.
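As a rough illustration of the sliding-window idea underlying the parallel schemes, the sketch below partitions a data block among parallel subdecoders and covers each sub-block with fixed-length windows, each followed by a warm-up (acquisition) region. This is a minimal sketch under assumed parameters; the function name, the scheduling details, and all values are illustrative, not the paper's exact segmented sliding window algorithm.

```python
# Illustrative sketch: partitioning a block among parallel subdecoders,
# each sweeping its sub-block in sliding windows. All names and
# parameters here are assumptions for illustration only.

def window_schedule(block_len, num_subdecoders, window_len, warmup_len):
    """Split the block into equal sub-blocks, one per subdecoder, and
    cover each sub-block with fixed-length windows. Each window gets a
    warm-up (acquisition) region beyond its end, clipped at the sub-block
    boundary, so backward state metrics can converge before being used."""
    sub_len = block_len // num_subdecoders  # assume exact division
    schedule = []
    for p in range(num_subdecoders):
        start, end = p * sub_len, (p + 1) * sub_len
        windows, w = [], start
        while w < end:
            w_end = min(w + window_len, end)
            acq_end = min(w_end + warmup_len, end)  # warm-up metrics are discarded
            windows.append({"window": (w, w_end), "acquisition": (w_end, acq_end)})
            w = w_end
        schedule.append(windows)
    return schedule

# Example: 1024-symbol block, 4 subdecoders, 64-symbol windows, 32-symbol warm-up.
for p, wins in enumerate(window_schedule(1024, 4, 64, 32)):
    print(f"subdecoder {p}: {len(wins)} windows, first = {wins[0]}")
```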
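The storage/recomputation tradeoff behind partial storage of state metrics can be sketched in the same spirit: forward state metrics are kept only at periodic checkpoints, and intermediate metrics are recomputed from the nearest earlier checkpoint on demand. The two-state recursion below is a toy stand-in used only to make the tradeoff concrete; it is not the paper's modified algorithm.

```python
# Illustrative sketch of partial storage of state metrics: store forward
# metrics only every K trellis steps, recompute the rest when needed.
# The two-state recursion is a toy stand-in, not the paper's scheme.

def forward_step(alpha, gamma):
    # Toy add-compare-select update over two trellis states.
    return (max(alpha[0] + gamma, alpha[1] - gamma),
            max(alpha[1] + gamma, alpha[0] - gamma))

def checkpointed_forward(gammas, K):
    """Run the forward recursion, storing alpha only every K steps.
    Storage drops from len(gammas) metric vectors to about len(gammas)/K,
    at the cost of up to K-1 recomputation steps per query."""
    alpha = (0.0, 0.0)
    checkpoints = {0: alpha}
    for t, g in enumerate(gammas, start=1):
        alpha = forward_step(alpha, g)
        if t % K == 0:
            checkpoints[t] = alpha

    def alpha_at(t):
        base = (t // K) * K          # nearest earlier checkpoint
        a = checkpoints[base]
        for g in gammas[base:t]:     # recompute the missing steps
            a = forward_step(a, g)
        return a

    return alpha_at

alpha_at = checkpointed_forward([0.5, -0.3, 0.8, 0.1, -0.6, 0.4, 0.2, -0.1], K=4)
print(alpha_at(6))  # recomputed from the checkpoint stored at t = 4
```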