Abstract

Working at Bell Labs in 1950, frustrated by error-prone punched-card readers, R. W. Hamming began developing error-correcting codes, which went on to become one of the most widely used error-detection and correction approaches in channel coding. This parity-based coding can detect two-bit errors and correct one-bit errors. Channel coding was later extended to correct burst errors in data. Depending on the number of data bits 'd' and parity bits 'k', the code is specified as an (n, k) code, where 'n' is the total code length (d + k). That is, 'k' parity bits are required to protect 'd' data bits, and these parity bits are redundant if the codeword contains no errors. Because of the fixed relationship between the data bits and parity bits of a valid codeword, the parity bits can easily be computed, and hence the information carried by 'n' bits can be represented by 'd' bits. By removing these redundant bits, it is possible to produce an optimal (i.e., shortest-length) representation of the image data. This work proposes a digital image compression technique based on Hamming codes. Depending on the requirement, either lossless or near-lossless compression can be achieved using the several code specifications described here. The compression ratio, computational cost, and time complexity of the proposed approach under various specifications are evaluated and compared, along with the quality of the decompressed images.
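To illustrate the underlying principle, consider the classic Hamming(7, 4) code (in the notation above, d = 4 data bits, k = 3 parity bits, n = 7). The Python sketch below is an illustration only, with hypothetical function names, and is not the authors' implementation: it shows that the parity bits of a valid codeword are fully determined by the data bits, so only the data bits need to be stored and the parity bits can be recomputed at decompression, while a single flipped bit can still be located and corrected via the syndrome.

def encode_hamming74(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword
    [p1, p2, d1, p3, d2, d3, d4] (parity at positions 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4      # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4      # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode_hamming74(c):
    """Correct a single-bit error (if any) and return the 4 data bits."""
    c = list(c)
    # Each syndrome bit XORs one parity bit with the positions it covers.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit, 0 if none
    if pos:
        c[pos - 1] ^= 1          # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

if __name__ == "__main__":
    data = [1, 0, 1, 1]
    codeword = encode_hamming74(data)
    # "Compression": a valid codeword is fully recoverable from its data
    # bits, so the 3 parity bits are redundant and can be dropped.
    stored = data                        # 4 bits instead of 7
    restored = encode_hamming74(stored)  # parity recomputed on decompression
    assert restored == codeword
    # Error correction: a single flipped bit is located by the syndrome.
    corrupted = codeword.copy()
    corrupted[4] ^= 1
    assert decode_hamming74(corrupted) == data

Dropping the parity bits of each valid codeword is what yields the d/n reduction described above; the same scheme generalizes to the other (n, k) specifications the paper evaluates.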
