<p>Turbo codes are error-correcting codes whose performance approaches the Shannon theoretical limit. They are attractive because they combine a random-like appearance on the channel with a physically realizable decoding structure. Modern communication systems demand low latency, fast switching, and reliable data transfer. The objective of this paper is to design a turbo encoder and decoder hardware chip and analyze its performance. In the turbo encoder, two convolutional codes are concatenated in parallel and separated by an interleaver (permuter). The data received from the channel is decoded iteratively by two constituent decoders: in each iteration, soft (probabilistic) information about each decoded bit is passed from one elementary decoder to the other and progressively refined. The decoder chip employs the maximum a posteriori (MAP) algorithm, and its performance is verified. The field-programmable gate array (FPGA) implementation is evaluated using hardware and timing parameters extracted from Xilinx ISE 14.7. The parallel concatenation offers a better overall code rate for the same component-code performance, together with reduced delay, low hardware complexity, and support for higher operating frequencies.</p>
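<p>As a rough illustration of the parallel concatenation described above (a software sketch, not the paper's hardware design), the example below models a rate-1/3 turbo encoder: two identical recursive systematic convolutional (RSC) encoders, assumed here to use generators (7, 5) octal, encode the data and an interleaved copy of it. The function names and the random interleaver are illustrative assumptions only.</p>
<pre><code>import random

def rsc_encode(bits):
    """Rate-1/2 RSC encoder, generators (7, 5) octal (assumed); returns the parity stream."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2        # feedback polynomial 1 + D + D^2 (7 octal)
        parity.append(a ^ s2)  # feedforward polynomial 1 + D^2 (5 octal)
        s2, s1 = s1, a         # shift the register
    return parity

def turbo_encode(bits, perm):
    """Parallel concatenation: systematic bits plus two parity streams (overall rate 1/3)."""
    p1 = rsc_encode(bits)                     # first constituent encoder
    p2 = rsc_encode([bits[i] for i in perm])  # second encoder sees the interleaved data
    return list(zip(bits, p1, p2))            # output triples (u_k, p1_k, p2_k)

# Usage: a random interleaver over an 8-bit block (illustrative only).
msg  = [1, 0, 1, 1, 0, 0, 1, 0]
perm = random.sample(range(len(msg)), len(msg))
print(turbo_encode(msg, perm))
</code></pre>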