Abstract
The use of 3D and stereo imaging is rapidly increasing. Compression, transmission, and processing can degrade the quality of stereo images. Quality assessment of such images differs from that of their 2D counterparts. Metrics that model 3D perception in the human visual system (HVS) are expected to assess stereoscopic quality more accurately. In this paper, inspired by the brain's sensory/motor fusion process, we combine the right and left images to form two synthesized images. The effects of different structural distortions on the statistical distributions of these synthesized images are analyzed. Based on the observed statistical changes, features are extracted from the synthesized images that can reveal the type and severity of distortion. We then propose a stacked neural network model that learns the extracted features and accurately predicts the quality of stereo images. This model is tested on 3D images from popular databases. Experimental results show the superiority of this method over state-of-the-art stereo image quality assessment approaches.
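A minimal sketch of the pipeline described above, not the authors' exact method: here the two "synthesized" images are assumed to be the pixel-wise sum and difference of the left/right views, the statistical features are simple moments of locally normalized luminance, and a generic multi-layer regressor stands in for the paper's stacked neural network model.

```python
# Sketch only: fusion rule, feature set, and regressor are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import skew, kurtosis
from sklearn.neural_network import MLPRegressor

def synthesize(left, right):
    """Combine the stereo pair into two fused images (assumed: sum and difference)."""
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    return left + right, left - right

def statistical_features(img, sigma=1.5, eps=1.0):
    """Moments of locally mean/contrast-normalized coefficients of one image."""
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    mscn = (img - mu) / (np.sqrt(np.clip(var, 0.0, None)) + eps)
    coeffs = mscn.ravel()
    return np.array([coeffs.mean(), coeffs.var(), skew(coeffs), kurtosis(coeffs)])

def stereo_features(left, right):
    """Feature vector for a stereo pair: statistics of both synthesized images."""
    s1, s2 = synthesize(left, right)
    return np.concatenate([statistical_features(s1), statistical_features(s2)])

# Toy training run on random data, only to show the interface;
# in practice X would come from a 3D IQA database and y from subjective scores.
rng = np.random.default_rng(0)
X = np.stack([stereo_features(rng.random((64, 64)), rng.random((64, 64)))
              for _ in range(50)])
y = rng.random(50)  # placeholder quality scores
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))
```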