Abstract

Image stitching technology aims to generate a Stitched Panoramic Image (SPI) by combining multiple narrow-view images that contain overlapping regions. However, heterogeneous artifacts may be introduced during stitching, which degrades visual perception. To evaluate the perceptual quality of SPIs automatically and accurately, this paper proposes a novel blind SPI quality evaluation method based on local visual and global deep features. Specifically, considering that image stitching mainly damages structure, texture, and color information, we first design a color structure-texture joint dictionary trained on a purpose-built dataset of stitching-specific image patches. Given an input SPI, its local visual and global deep features are extracted to characterize stitching-specific distortions. For the local visual features, the trained dictionary is employed to capture structure, texture, and color distortions via sparse feature extraction. Then, since sparse features are insensitive to weak structural distortions, weighted local binary pattern (LBP) features are extracted to measure various weak distortions. For the global perceptual features, deep features are extracted with a pre-trained convolutional neural network to represent high-level semantics. Finally, given the diversity of the extracted features, an ensemble learning strategy is adopted to improve the generalization performance and prediction accuracy of the proposed model. Experimental results show that, compared with conventional 2D and SPI quality measurement methods, the proposed method measures stitching-specific distortions more accurately and agrees more closely with subjective ratings.
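The local binary pattern features mentioned above encode local texture by comparing each pixel with its 3x3 neighborhood. The following is a minimal NumPy sketch of a basic (unweighted) 8-neighbor LBP descriptor for illustration; the paper's weighted variant, its weighting scheme, and its exact parameters are not specified in the abstract, so this is an assumption-laden simplification, not the authors' implementation.

```python
import numpy as np

def lbp_8(img):
    """Basic 3x3 local binary pattern: each interior pixel gets an 8-bit
    code, one bit per neighbor, set when the neighbor >= the center."""
    c = img[1:-1, 1:-1]
    # Neighbor offsets in clockwise order starting at the top-left.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes, used as a texture descriptor."""
    h, _ = np.histogram(lbp_8(img), bins=bins, range=(0, 256))
    return h / h.sum()
```

In practice the per-pixel codes are pooled into a histogram (as above) before being fed, alongside the sparse and deep features, to a quality regressor.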
