Abstract

The issue of robust and joint source-channel decoding of quasi-arithmetic codes is addressed. Quasi-arithmetic coding is a reduced-precision, reduced-complexity implementation of arithmetic coding, which amounts to approximating the distribution of the source. This approximation introduces redundancy that can be exploited for robust decoding in the presence of transmission errors; it thus controls both the trade-off between compression efficiency and complexity and the redundancy (excess rate) introduced by this suboptimality. This paper first provides a state model of a quasi-arithmetic coder and decoder for binary and M-ary sources, from which the design of an error-resilient soft decoding algorithm follows quite naturally. The compression efficiency of quasi-arithmetic codes makes it possible to add extra redundancy in the form of markers designed specifically to prevent desynchronization. The algorithm is directly amenable to iterative source-channel decoding in the spirit of serial turbo codes. The coding and decoding algorithms have been tested for a wide range of channel signal-to-noise ratios (SNRs). Experimental results reveal improved symbol error rate (SER) and SNR performance compared with Huffman and optimal arithmetic codes.
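To make the reduced-precision idea concrete, the following sketch (a minimal illustration, not the paper's implementation; the precision T, the probability p0 and the function name are assumptions) encodes a binary sequence using integer interval arithmetic on a grid of T values. Because the subdivision point is rounded to this grid, the coder only approximates the true source probability, which is precisely the controlled redundancy discussed above.

    # Minimal sketch of binary quasi-arithmetic encoding with integer precision T.
    # Illustrative only: T, p0 and qa_encode are assumptions, not the paper's code.
    def qa_encode(bits, p0=0.7, T=8):
        """Encode a list of 0/1 source bits; return the emitted code bits."""
        assert T >= 4 and T & (T - 1) == 0, "T is assumed to be a power of two"
        low, high, out = 0, T, []
        for b in bits:
            # Split point rounded to the integer grid: this is where the true
            # probability p0 is approximated, introducing residual redundancy.
            split = low + max(1, min(high - low - 1, round((high - low) * p0)))
            low, high = (low, split) if b == 0 else (split, high)
            # Renormalize: emit a bit whenever the interval lies in one half of [0, T).
            while high <= T // 2 or low >= T // 2:
                if high <= T // 2:            # lower half -> emit 0
                    out.append(0)
                    low, high = 2 * low, 2 * high
                else:                          # upper half -> emit 1
                    out.append(1)
                    low, high = 2 * low - T, 2 * high - T
        # Termination: emit the bits of `low`, a point inside the final interval.
        for i in reversed(range((T - 1).bit_length())):
            out.append((low >> i) & 1)
        return out

    print(qa_encode([0, 0, 1, 0, 0, 0, 1, 0]))

A small T (such as the T = 4 used in the experiments) keeps the number of coder states small, which helps make an exhaustive state model and soft decoding tractable, at the cost of a coarser approximation of the source distribution.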

Highlights

  • Entropy coding, producing variable length codewords (VLCs), is a core component of any data compression scheme

  • VLCs are very sensitive to channel noise: when some bits are altered by the channel, synchronization losses can occur at the receiver, the positions of symbol boundaries are not properly estimated, and dramatic symbol error rates (SERs) follow (a toy illustration is given after this list)

  • The first experiment aimed at comparing the soft-decoding performance of Huffman codes [7], arithmetic codes [19], and quasi-arithmetic codes with T = 4 at comparable overall rates
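The desynchronization effect described in the highlights can be illustrated with a toy prefix code (the code table, message and flipped position below are hypothetical, not taken from the paper): a single bit error shifts the symbol boundaries seen by the decoder, so symbol errors are not confined to the corrupted position.

    # Toy illustration of VLC desynchronization (hypothetical 3-symbol prefix code).
    CODE = {"a": "0", "b": "10", "c": "11"}

    def vlc_encode(symbols):
        return "".join(CODE[s] for s in symbols)

    def vlc_decode(bits):
        inv, out, buf = {v: k for k, v in CODE.items()}, [], ""
        for bit in bits:
            buf += bit
            if buf in inv:
                out.append(inv[buf])
                buf = ""
        return out

    msg = list("abcabcabca")
    tx = vlc_encode(msg)
    rx = tx[:3] + ("1" if tx[3] == "0" else "0") + tx[4:]  # flip a single bit
    print(vlc_decode(tx))  # the original symbols are recovered
    print(vlc_decode(rx))  # boundaries shift: errors spread beyond the flipped bit's symbol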


Summary

INTRODUCTION

Entropy coding, producing variable length codewords (VLCs), is a core component of any data compression scheme. For a comparable overall rate, the better compression efficiency of quasi-arithmetic codes compared with Huffman codes allows extra redundancy (short "soft" synchronization patterns) to be dedicated to decoder resynchronization, resulting in significantly higher error resilience. The use of channel codes (CCs) is considered in order to reduce the bit error rate seen by the source estimation algorithm. The latter can be placed in an iterative decoding structure in the spirit of serially concatenated turbo codes, provided that the channel decoder and the quasi-arithmetic decoder are separated by an interleaver. This material is exploited in the sequel (Sections 7 and 8) to explain the estimation algorithm and the soft synchronization procedure. Simulation results comparing the joint source-channel turbo decoding algorithm with soft decoding of quasi-arithmetic codes are provided.
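As a rough illustration of the soft synchronization idea (the marker pattern, period and function name below are assumptions, not the paper's specification), a known dummy pattern can be inserted into the symbol stream at fixed positions before quasi-arithmetic encoding; the estimation algorithm at the receiver can then favour trellis paths that reproduce this pattern at the expected positions.

    # Sketch of "soft" synchronization marker insertion (hypothetical names and values).
    MARKER = [0, 1, 1]  # assumed marker pattern, known to both encoder and decoder

    def insert_soft_sync(symbols, period=10, marker=MARKER):
        out = []
        for i, s in enumerate(symbols, start=1):
            out.append(s)
            if i % period == 0:
                out.extend(marker)  # extra redundancy at a position known to the decoder
        return out

    # Example: 25 source symbols -> markers inserted after symbols 10 and 20.
    print(insert_soft_sync([0, 1] * 12 + [0], period=10))

In the serial concatenation described above, such a marker-augmented stream would then be quasi-arithmetically encoded, interleaved, and protected by the channel code before transmission, so that the quasi-arithmetic decoder and the channel decoder can exchange soft information iteratively.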

NOTATIONS AND PROBLEM STATEMENT
ARITHMETIC CODING PRINCIPLES
FAST REDUCED-PRECISION IMPLEMENTATION
Quasi-arithmetic coding
Quasi-arithmetic decoding
Source distribution approximation
SOURCE MODEL
MODELING BIT STREAM DEPENDENCIES
Product model: source and coder
Product model: source and decoder
ESTIMATION ALGORITHM
SOFT SYNCHRONIZATION
ITERATIVE CC-AC DECODING ALGORITHM
EXPERIMENTAL RESULTS
CONCLUSION
