Abstract

Turbo codes are error-correcting codes whose performance approaches the Shannon theoretical limit. The motivation for using turbo codes is that they combine a random-like appearance on the channel with a physically realizable decoding structure. Communication systems face the problems of latency, fast switching, and reliable data transfer. The objective of this paper is to design a turbo encoder and decoder hardware chip and analyze its performance. In the turbo encoder, two convolutional codes are concatenated in parallel and separated by an interleaver (permuter). The data received from the channel is decoded iteratively by two constituent decoders: in each iteration, soft (probabilistic) information about each bit of the decoded sequence is passed from one elementary decoder to the other and updated. The performance of the chip is verified using the maximum a posteriori (MAP) algorithm in the decoder. The field-programmable gate array (FPGA) implementation is evaluated using hardware and timing parameters extracted from Xilinx ISE 14.7. The parallel concatenation offers a better overall rate for the same component-code performance, along with reduced delay, low hardware complexity, and support for higher operating frequencies.
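
As a rough illustration of the parallel-concatenated structure described above, the following Python sketch encodes a data block with two identical recursive systematic convolutional (RSC) encoders, the second fed through a pseudo-random interleaver, producing systematic bits plus two parity streams (rate ≈ 1/3). The (1, 5/7)-style generator taps and the random permuter are illustrative assumptions, not the paper's actual hardware design.

```python
import random

def rsc_encode(bits, feedback=(1, 1, 1), feedforward=(1, 0, 1)):
    """Memory-2 recursive systematic convolutional (RSC) encoder.
    Assumed generators: feedback 1+D+D^2, feedforward 1+D^2 (illustrative).
    Returns one parity bit per input bit; systematic bits are kept separately."""
    state = [0, 0]
    parity = []
    for b in bits:
        # recursion: input XORed with the tapped state bits
        fb = b ^ (state[0] & feedback[1]) ^ (state[1] & feedback[2])
        # parity bit from the feedforward taps
        p = (fb & feedforward[0]) ^ (state[0] & feedforward[1]) ^ (state[1] & feedforward[2])
        parity.append(p)
        state = [fb, state[0]]          # shift the register
    return parity

def turbo_encode(bits, interleaver):
    """Parallel concatenation: the first RSC sees the data in natural order,
    the second sees the interleaved data; output is (systematic, parity1, parity2)."""
    parity1 = rsc_encode(bits)
    permuted = [bits[i] for i in interleaver]
    parity2 = rsc_encode(permuted)
    return bits, parity1, parity2

# usage sketch
data = [random.randint(0, 1) for _ in range(16)]
pi = list(range(len(data)))
random.shuffle(pi)                      # pseudo-random permuter (assumption)
sys_bits, p1, p2 = turbo_encode(data, pi)
```

In an iterative decoder, each constituent MAP decoder would work on one of these parity streams and exchange soft extrinsic information about the systematic bits with the other, de-interleaving and re-interleaving between passes.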
