Abstract

This paper presents a novel cellular connectionist model for implementing clustering-based adaptive quantization in video coding applications. The adaptive quantization is designed for a wavelet-based video coding system that requires both scene-adaptive and signal-adaptive quantization. Because the adaptive quantization is accomplished through a maximum a posteriori probability (MAP) estimation-based clustering process, the massive computation of neighborhood constraints makes a software-based real-time implementation difficult. The proposed cellular connectionist model provides an architecture for real-time implementation of the clustering-based adaptive quantization. With a cellular neural network architecture mapped onto the image domain, the Gibbs spatial constraints are realized through interactions among neurons connected to their neighbors, while the computation of the coefficient distribution serves as an external input to each component of a neuron, or processing element (PE). We prove that the proposed cellular neural network converges to the desired steady state under the proposed update scheme. The model also provides a general architecture for image processing tasks that rely on Gibbs spatial constraint-based computations.
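To illustrate the kind of computation the abstract describes, the sketch below shows a generic MAP clustering update with a Gibbs (Potts-style) neighborhood prior, in which each pixel acts as a processing element whose data term is an external input and whose neighbor-label agreement plays the role of the inter-neuron coupling. This is a minimal illustration under assumed choices (a Gaussian class-conditional coefficient model, a 4-neighborhood Potts prior with strength `beta`, and ICM-style sequential updates), not the authors' actual cellular neural network architecture or update rule.

```python
import numpy as np

def map_cluster_labels(coeffs, means, variances, beta=1.0, n_iters=20):
    """Iterative MAP-style clustering of wavelet coefficients with a
    Potts/Gibbs neighborhood prior (illustrative sketch, not the paper's
    exact architecture). Each pixel is treated as a processing element:
    its class log-likelihood is the external input, and agreement with
    4-neighbor labels is the spatial coupling."""
    h, w = coeffs.shape
    k = len(means)

    # External input per PE: Gaussian log-likelihood of each class,
    # computed once from the assumed coefficient distribution.
    data_term = np.stack([
        -0.5 * np.log(2 * np.pi * variances[c])
        - (coeffs - means[c]) ** 2 / (2 * variances[c])
        for c in range(k)
    ], axis=-1)                          # shape (h, w, k)

    labels = data_term.argmax(axis=-1)   # initialize from data term alone

    for _ in range(n_iters):
        changed = False
        for i in range(h):
            for j in range(w):
                # Count 4-neighbors agreeing with each candidate label
                # (the Gibbs spatial constraint).
                agree = np.zeros(k)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        agree[labels[ni, nj]] += 1
                # Local MAP decision: data term plus neighborhood prior.
                new_label = int(np.argmax(data_term[i, j] + beta * agree))
                if new_label != labels[i, j]:
                    labels[i, j] = new_label
                    changed = True
        if not changed:        # no label changed: a steady state is reached
            break
    return labels
```

In such a scheme the cluster labels can then drive the quantizer step size per coefficient region; the convergence argument in the paper concerns the analogous steady state of the proposed network rather than this sequential sketch.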

