Abstract
The error-correcting capability of tailbiting codes generated by convolutional encoders is described. To obtain a description beyond what the minimum distance dmin of the tailbiting code implies, the active tailbiting segment distance is introduced. Describing correctable error patterns via active distances leads to an upper bound on the decoding block error probability of tailbiting codes. From the active tailbiting segment distance, it is easy to obtain the tailbiting length needed for the minimum distance of the tailbiting code to equal the free distance dfree of the convolutional code encoded by the same encoder. This is useful when designing and analyzing concatenated convolutional codes whose component codes are terminated with the tailbiting method. Lower bounds on the active tailbiting segment distance are derived, together with an upper bound on the ratio between the tailbiting length and the memory of the convolutional generator matrix such that dmin equals dfree. Furthermore, affine lower bounds on the active tailbiting segment distance suggest that good tailbiting codes are generated by convolutional encoders with large active-distance slopes.
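The tailbiting termination referred to above can be illustrated with a small example. The following is a minimal sketch, not taken from the paper: it encodes an information block with a rate-1/2, memory-2 feedforward convolutional encoder (the generator polynomials 7 and 5 in octal are an arbitrary choice for illustration), preloading the shift register with the last m information bits so that the encoder starts and ends in the same state and no termination tail is transmitted.

```python
# Minimal sketch of tailbiting termination (illustrative assumptions:
# rate-1/2 feedforward encoder, memory m = 2, generators 7 and 5 octal,
# i.e. g0 = 1 + D + D^2 and g1 = 1 + D^2).

G = [[1, 1, 1], [1, 0, 1]]   # tap coefficients of g0 and g1
M = len(G[0]) - 1            # encoder memory m

def tailbiting_encode(u):
    """Encode the information block u (list of 0/1 bits) as a tailbiting codeword."""
    # Preload the shift register with the last m information bits
    # (newest bit first), so the final state equals the initial state.
    state = list(reversed(u[-M:]))
    v = []
    for bit in u:
        regs = [bit] + state                      # current input plus register contents
        for g in G:
            v.append(sum(gi * ri for gi, ri in zip(g, regs)) % 2)
        state = regs[:-1]                         # shift: drop the oldest register content
    return v

u = [1, 0, 1, 1, 0, 0, 1, 0]   # tailbiting length N = 8 information bits
print(tailbiting_encode(u))    # 2*N = 16 code bits, no termination tail
```

With N information bits the codeword has 2N code bits, so no rate loss is incurred by termination; for short tailbiting lengths, however, the minimum distance dmin of the resulting block code can fall below dfree, which is the regime the active tailbiting segment distance characterizes.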