Abstract

List decoding is a promising technique for machine-type communications (MTC) and other applications that pursue the high coding gain of convolutional codes. However, several obstacles limit the practicality of list decoding. Specifically, non-tail-biting list decoding imposes heavy data-storage demands, while tail-biting list decoding requires substantial computational resources to preserve optimal performance. In this paper, we rethink the parallel list decoder design from both the algorithmic and the implementation perspectives to circumvent these obstacles. On the one hand, internal relations among the multiple decoding sequences are revealed and exploited to redesign the non-tail-biting list decoding algorithm, freeing the design from the massive storage expense. On the other hand, a reliability-ordered initial-state estimator is designed for the tail-biting list decoder, which alleviates the computational burden while retaining the optimal error-correction performance. Together with optimizations of the underlying structures, the proposed list decoder achieves better energy efficiency than existing work at the same coding gain. Moreover, in the MTC scenario, the proposed design consumes less area than existing schemes in delivering the same coding-gain enhancement.
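
For context only (this is not the decoder proposed in the paper), the following minimal Python sketch illustrates the generic parallel list Viterbi idea that such designs build on: at every trellis state the decoder keeps the L best survivor paths rather than a single one, which is where the storage and computation costs discussed above originate. The specific code (rate-1/2, constraint length 3, generators 7 and 5 in octal), the hard-decision Hamming metric, and the all-zero starting state are illustrative assumptions.

    # Generic parallel list Viterbi sketch (not the paper's proposed algorithm).
    import heapq

    G = (0b111, 0b101)   # generator polynomials (7, 5 octal), illustrative choice
    K = 3                # constraint length -> 4 trellis states
    STATES = 1 << (K - 1)

    def branch(state, bit):
        """Return (next_state, output_bits) for input `bit` taken from `state`."""
        reg = (bit << (K - 1)) | state                      # shift new bit into register
        out = tuple(bin(reg & g).count("1") & 1 for g in G) # parity of tapped bits
        return reg >> 1, out

    def plva_decode(received, L=4):
        """Hard-decision list decoding: return the L best information sequences."""
        n_steps = len(received) // 2
        # Each state keeps up to L candidates: (path metric, decided bits)
        lists = {s: [] for s in range(STATES)}
        lists[0] = [(0, "")]                                # assume all-zero start state
        for t in range(n_steps):
            rx = received[2 * t:2 * t + 2]
            new_lists = {s: [] for s in range(STATES)}
            for s, cands in lists.items():
                for bit in (0, 1):
                    ns, out = branch(s, bit)
                    d = sum(a != b for a, b in zip(out, rx))   # Hamming branch metric
                    for m, path in cands:
                        new_lists[ns].append((m + d, path + str(bit)))
            # Prune every state's list to its L best survivors
            lists = {s: heapq.nsmallest(L, c) for s, c in new_lists.items()}
        # Merge all states' lists and return the global L best sequences
        final = heapq.nsmallest(L, (c for cands in lists.values() for c in cands))
        return [path for _, path in final]

    if __name__ == "__main__":
        # Encode "1011", flip one received bit, then list-decode
        bits, state, coded = "1011", 0, []
        for b in bits:
            state, out = branch(state, int(b))
            coded += out
        coded[2] ^= 1                           # inject a single bit error
        print(plva_decode(coded, L=4))          # "1011" ranks first in the list

Note how the per-state candidate lists multiply the survivor-memory footprint by L, and how a tail-biting variant would additionally have to handle an unknown initial state; these are, respectively, the storage and computation issues that the abstract's non-tail-biting redesign and reliability-ordered initial-state estimator are aimed at.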
