The most powerful channel coding schemes, namely those based on turbo codes and LDPC (low-density parity-check) codes, have in common the principle of iterative decoding. Shannon's predictions for optimal codes imply random-like codes, which intuitively suggests that the decoding of such codes would be prohibitively complex. This section gives a brief comparison of turbo codes and LDPC codes, both in terms of performance and complexity. In order to give a fair comparison, we compare codes of the same input word length; the rate of both codes is R = 1/2.

Berrou's coding scheme, however, showed that a powerful code could be constructed by combining two or more simple codes. These constituent codes can be decoded separately, while exchanging probabilistic, or uncertainty, information with each other about the quality of the decoding of each bit. This meant that complex codes had become practical, and the discovery triggered a series of new, focused research programmes, with prominent researchers devoting their time to this new area. Following on from the work on turbo codes, MacKay at the University of Cambridge revisited some 35-year-old work originally undertaken by Gallager (5), who had constructed a class of codes dubbed low-density parity-check (LDPC) codes. Building on the increased understanding of iterative decoding and probability propagation on graphs that grew out of the work on turbo codes, MacKay was able to show that LDPC codes could be decoded in a similar manner to turbo codes, and may even be able to beat them (6).

As a review, this paper considers both these classes of codes and compares their performance and complexity. A description of both classes of codes is given.

I. Turbo Codes

There are different ways of constructing turbo codes, but what they have in common is the use of two or more codes in the encoding process. The constituent encoders are typically recursive systematic convolutional (RSC) codes (3). Figure 1 shows a general set-up of the encoder. The number of bits in the code word varies with the rate of the code. For an overall code rate of R = 1/2, it is normal to use two R = 1/2 RSC codes, where every systematic bit is transmitted, but the coded bits are punctured: a puncturing matrix selects only every second code bit from each of the constituent codes for inclusion in the transmitted symbol. The permutation matrix Π permutes the systematic bits, so that the two constituent codes are excited by the same set of information bits, albeit in a different order. The permutation matrix is a critical factor in the design of good codes. The code word consists of the systematic bit C_i^s (equal to the information bit d_i) together with C_i^1 and C_i^2, which are vectors of coded bits from the two constituent convolutional codes. These vectors can contain one or more bits; d_i is the information bit and Π is the permutation matrix.
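To make this encoder structure concrete, the following is a minimal sketch of a rate R = 1/2 turbo encoder. The specific constituent code (a memory-2 RSC code with octal generators (1, 5/7)), the seeded pseudorandom interleaver, and the even/odd parity puncturing pattern are illustrative assumptions, not details taken from the paper; trellis termination is also omitted for brevity.

```python
# Sketch of a rate-1/2 turbo encoder: two identical RSC constituent
# encoders, a permutation (interleaver) Pi, and puncturing of parity bits.
import random

def rsc_encode(bits):
    """Parity sequence of an assumed (1, 5/7) octal RSC code:
    feedback polynomial 1 + D + D^2, feedforward polynomial 1 + D^2."""
    s1 = s2 = 0
    parity = []
    for d in bits:
        a = d ^ s1 ^ s2        # recursive feedback bit
        parity.append(a ^ s2)  # feedforward (parity) output
        s2, s1 = s1, a         # shift-register update
    return parity

def turbo_encode(info_bits, seed=0):
    """Encode info_bits at overall rate R = 1/2: every systematic bit is
    transmitted, and parity bits are punctured so that encoder 1
    contributes on even positions and encoder 2 on odd positions."""
    n = len(info_bits)
    perm = list(range(n))
    random.Random(seed).shuffle(perm)              # permutation Pi
    p1 = rsc_encode(info_bits)                     # parity from code 1
    p2 = rsc_encode([info_bits[j] for j in perm])  # parity from code 2
    codeword = []
    for i in range(n):
        codeword.append(info_bits[i])                    # systematic bit d_i
        codeword.append(p1[i] if i % 2 == 0 else p2[i])  # punctured parity
    return codeword  # 2n output bits for n input bits -> R = 1/2

# Example: encode an 8-bit message
print(turbo_encode([1, 0, 1, 1, 0, 0, 1, 0]))
```

A practical design would replace the random permutation with a carefully constructed interleaver and terminate the trellises of both encoders; as noted above, the choice of permutation matrix is critical to the performance of the resulting code.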