Abstract

The decoding of long-memory, high-rate punctured convolutional codes by sequential decoding algorithms is investigated. Both the stack and the Fano algorithms have been thoroughly tested through computer simulation with coding rates ranging from R=2/3 to R=7/8. Error and overflow probabilities and the variability of decoding effort are similar for both algorithms. With hard quantization, plateaus appear in the cumulative distributions of decoding effort for both algorithms. Comparing the punctured approach to the more traditional decoding technique for high-rate codes, it is found that punctured decoders perform a larger number of simpler computations, so that the overall decoding effort is on average larger for the usual decoder than for its punctured counterpart. Finally, computational variability, error probability, and overflow probability are no worse for punctured decoders than for normal decoders.
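To make the punctured approach concrete, the sketch below shows how a high-rate code is obtained by periodically deleting symbols from a low-rate mother code. The rate-1/2 encoder (constraint length 3, generators 7 and 5 octal) and the puncturing pattern are common textbook choices assumed for illustration; this is not the specific code or decoder studied in the paper.

```python
# Illustrative sketch (assumed example, not the paper's code): puncturing a
# rate-1/2 convolutional code up to rate 2/3.

def conv_encode_r12(bits):
    """Rate-1/2 encoder, constraint length 3, generators 111 and 101 (binary)."""
    s1 = s2 = 0  # shift-register state
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator 111 (7 octal)
        out.append(b ^ s2)       # generator 101 (5 octal)
        s1, s2 = b, s1
    return out

def puncture(coded, pattern=(1, 1, 1, 0)):
    """Delete coded symbols where the cyclically repeated pattern is 0.
    Pattern (1,1,1,0) keeps 3 of every 4 symbols: rate 1/2 -> rate 2/3."""
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

msg = [1, 0, 1, 1]
coded = conv_encode_r12(msg)   # 8 coded symbols at rate 1/2
punct = puncture(coded)        # 6 symbols: 4 info bits / 6 symbols = rate 2/3
```

Because the punctured code inherits the trellis of the rate-1/2 mother code, a sequential decoder for it extends nodes with only two branches each, which is why it performs more, but individually simpler, computations than a decoder built directly on the high-rate trellis.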
