Abstract

For Pt. I see ibid., vol. 2, no. 2, p. 187, 1992. For applications in graphics computers, image and video composition, high-definition television (HDTV), and optical fiber networks, Huffman-coded images need to be reconstructed at a high throughput rate. Part I presented several architectural and architecture-specific optimization techniques. However, because of the recursion within the reconstruction algorithm, the achievable throughput rate of a given decoding architecture in a given IC technology is limited. The authors propose various concurrent decoding methods that relax this throughput limit by using parallel or pipelined hardware. These methods are simple, effective, flexible, and applicable to general decoder architectures. Unlimited concurrency can be achieved at the expense of additional latency; the overhead is low, and the complexity increases linearly with the throughput improvement. It is believed that the proposed methods and architectures make it possible to reconstruct arbitrarily high-resolution Huffman-coded images and video in real time with current electronics.
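The recursion mentioned in the abstract is easiest to see in software: a bit-serial decoder cannot locate the start of the next codeword until it knows the length of the current one. The sketch below only illustrates that data dependency; it is not the decoder architecture of the paper, and the table layout, MAX_CODE_LEN, and names such as peek_bits and decode_serial are hypothetical.

    /*
     * Illustration of the serial dependency in Huffman decoding (not the
     * authors' architecture). A single-level lookup table indexed by a
     * MAX_CODE_LEN-bit window returns the decoded symbol and codeword length.
     */
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_CODE_LEN 12            /* hypothetical maximum codeword length */

    /* Hypothetical table entry: decoded symbol plus its codeword length. */
    typedef struct {
        uint8_t symbol;
        uint8_t length;                /* 1..MAX_CODE_LEN, table fully populated */
    } table_entry_t;

    /* Read MAX_CODE_LEN bits, MSB first, starting at bit position `pos`. */
    static uint32_t peek_bits(const uint8_t *bits, size_t pos)
    {
        uint32_t w = 0;
        for (int i = 0; i < MAX_CODE_LEN; i++) {
            size_t p = pos + (size_t)i;
            w = (w << 1) | ((bits[p >> 3] >> (7 - (p & 7))) & 1u);
        }
        return w;
    }

    /* Sequential decoder: every iteration depends on the previous one through
     * `pos`, so one decoder can emit at most one symbol per iteration. */
    size_t decode_serial(const uint8_t *bits, size_t nbits,
                         const table_entry_t *table,   /* 1 << MAX_CODE_LEN entries */
                         uint8_t *out, size_t max_out)
    {
        size_t pos = 0, n = 0;
        while (pos + MAX_CODE_LEN <= nbits && n < max_out) {
            table_entry_t e = table[peek_bits(bits, pos)];
            out[n++] = e.symbol;
            pos += e.length;           /* recursion: next start needs this length */
        }
        return n;
    }

A concurrent decoder, in the spirit of the methods summarized above, would break this serial chain, for example by pipelining the lookup or by decoding several stream positions in parallel, at the cost of additional latency.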
