Abstract

Viterbi decoding and sequential decoding are well known as maximum-likelihood or approximate maximum-likelihood decoding methods for convolutional codes. The decoding error rates of these methods have been examined in the past, but those evaluations assumed a memoryless channel. By contrast, this paper proposes several maximum-likelihood decoding methods for convolutional codes on channel models with memory, such as the Gilbert model. In decoding method (I), the conditional probability of the transmitted sequence given the received sequence is determined for each state of the channel. Using this probability as the metric, the channel state is treated in the same way as the code trellis, and decoding proceeds by suitably selecting the metric. In decoding method (II), the channel state is estimated in order to suppress the growth of computational complexity with the number of channel states. These methods are applied to Viterbi decoding. Computer simulation shows that, at equal implementation complexity, the decoding error rate improves over traditional Viterbi decoding.
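The abstract does not give the paper's exact algorithm, but the idea behind decoding method (I), running Viterbi over a joint trellis whose states pair the encoder state with the channel state, can be illustrated with a minimal sketch. All parameters below (the rate-1/2 code with generators 7 and 5 octal, the two-state Gilbert error and transition probabilities, and the assumption that the channel state changes once per code symbol pair) are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical Gilbert-channel parameters (not from the paper):
# per-state bit-error probabilities and state-transition probabilities.
P_ERR = {'G': 0.01, 'B': 0.30}
P_TRANS = {('G', 'G'): 0.95, ('G', 'B'): 0.05,
           ('B', 'G'): 0.10, ('B', 'B'): 0.90}

def conv_encode(bits):
    """Rate-1/2 convolutional encoder, generators g0=111, g1=101 (7, 5 octal)."""
    s = [0, 0]                            # shift register, starts all-zero
    out = []
    for b in bits:
        out.append(b ^ s[0] ^ s[1])       # g0 = 1 + D + D^2
        out.append(b ^ s[1])              # g1 = 1 + D^2
        s = [b, s[0]]
    return out

def viterbi_gilbert(received):
    """Viterbi decoding over the joint (encoder state, channel state) trellis.

    For simplicity this sketch lets the channel state change once per
    two-bit code symbol; the branch metric is the log-likelihood of the
    received pair given the hypothesised transmitted pair and channel state,
    plus the log of the channel state-transition probability.
    """
    enc_states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {(e, c): (0.0 if e == (0, 0) else -math.inf)
              for e in enc_states for c in 'GB'}
    path = {k: [] for k in metric}        # survivor input sequence per state
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = {k: -math.inf for k in metric}
        new_path = {k: [] for k in metric}
        for (e, c), m in metric.items():
            if m == -math.inf:
                continue
            for b in (0, 1):              # hypothesised input bit
                tx = [b ^ e[0] ^ e[1], b ^ e[1]]
                ne = (b, e[0])            # next encoder state
                for nc in 'GB':           # next channel state
                    p = P_ERR[nc]
                    branch = math.log(P_TRANS[(c, nc)])
                    for rx, tb in zip(r, tx):
                        branch += math.log(p if rx != tb else 1 - p)
                    if m + branch > new_metric[(ne, nc)]:
                        new_metric[(ne, nc)] = m + branch
                        new_path[(ne, nc)] = path[(e, c)] + [b]
        metric, path = new_metric, new_path
    best = max(metric, key=metric.get)    # best joint state at the end
    return path[best]
```

Decoding method (II) would replace the full product of channel states with a single estimated channel-state sequence, keeping the trellis at the size of the code alone; that variant is not shown here.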

