Abstract

To use cellular communication channels efficiently, we propose a neural computation model for image coding. Through constant-time unsupervised learning, the model approximates optimal pattern clustering from training example images via a memory adaptation process and builds a compression codebook in its synaptic weight matrix. This neural codebook can be distributed to both ends of a transmission channel for fast codec operations on general images. Only the indices of the codebook entries that best match the patterns in the image to be transmitted are sent, and these indices can be compressed further with a classical entropy coding method to reduce the transmission volume even more. Other advantages of the model are low training time complexity, high utilization of neurons, robust pattern clustering, and simple computation. The intrinsically parallel nature of neural networks also makes the model well suited to VLSI implementation. Our compression results are competitive with JPEG and wavelet methods. We also report cross-compression results for the general codebook, filtering effects obtained through special training methods, and learning enhancement techniques for building a compact codebook that yields both high compression and good picture quality.
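As an illustration of the codec path the abstract describes, the sketch below assumes a codebook has already been learned and shows block-wise nearest-entry encoding and table-lookup decoding. The function names, block size, and codebook shape are hypothetical, the neural training stage is not shown, and the entropy coding of the indices is omitted; this is a minimal sketch of codebook-based (vector quantization) image coding, not the paper's actual implementation.

```python
# Minimal sketch of codebook-based image coding: each image block is replaced
# by the index of its nearest codebook entry, and the decoder rebuilds the
# image by looking the indices up in the same codebook.  All names and sizes
# here are illustrative assumptions.
import numpy as np

def encode(image: np.ndarray, codebook: np.ndarray, block_size: int = 4) -> np.ndarray:
    """Map each block of `image` to the index of its best-matching codebook entry."""
    h, w = image.shape
    indices = []
    for r in range(0, h, block_size):
        for c in range(0, w, block_size):
            block = image[r:r + block_size, c:c + block_size].reshape(-1)
            dists = np.sum((codebook - block) ** 2, axis=1)  # squared distance to every entry
            indices.append(int(np.argmin(dists)))
    return np.array(indices, dtype=np.int32)

def decode(indices: np.ndarray, codebook: np.ndarray, shape: tuple, block_size: int = 4) -> np.ndarray:
    """Rebuild the image by substituting the codebook entry for each transmitted index."""
    h, w = shape
    image = np.zeros((h, w), dtype=codebook.dtype)
    k = 0
    for r in range(0, h, block_size):
        for c in range(0, w, block_size):
            image[r:r + block_size, c:c + block_size] = codebook[indices[k]].reshape(block_size, block_size)
            k += 1
    return image

# Example: a 256-entry codebook of 4x4 blocks shared by encoder and decoder.
codebook = np.random.rand(256, 16)
image = np.random.rand(64, 64)
idx = encode(image, codebook)              # only these indices need to be transmitted
recon = decode(idx, codebook, image.shape) # receiver-side reconstruction
```

In practice the transmitted index stream would then pass through a classical entropy coder (e.g., Huffman or arithmetic coding), as the abstract notes, to squeeze out further redundancy.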
