Abstract

Image stitching, the composition of images from different viewpoints into a 360-degree panoramic image, is an essential component of immersive applications such as VR and AR. A no-reference (NR) quality metric specifically designed to evaluate stitched panoramic images is highly desirable when ground-truth reference images are not available. In this paper, we use Convolutional Sparse Coding (CSC) with a set of convolutional filters to locate stitching-specific distortions in a target image, and design trained kernels to quantify the compound effects of multiple distortion types in a local region. Specifically, our contributions are: i) a training database labeled with the locations of distortion regions is released; ii) an NR metric is proposed to accurately assess stitching-specific artifacts, such as ghosting, using convolutional sparse coding; and iii) a novel sequential feature selection algorithm is proposed to quantify the aforementioned compound distortion effects. In extensive experiments, we show that the performance of our proposed NR metric is comparable to that of state-of-the-art full-reference metrics designed for stitched images.
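To make the CSC idea concrete, the sketch below shows one common way such localization can work: sparse codes for a bank of convolutional kernels are fitted with a simple ISTA-style iteration, and the per-pixel reconstruction error then serves as a distortion map (regions the learned kernels explain poorly score high). This is a minimal illustration under assumed kernels and hyperparameters (`lam`, `step`, `n_iter` are placeholders), not the paper's actual trained model or selection algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

def csc_error_map(x, kernels, lam=0.1, step=0.05, n_iter=50):
    """Fit convolutional sparse codes to image x and return (codes, error map).

    Solves min_z 0.5 * ||x - sum_k d_k * z_k||^2 + lam * sum_k ||z_k||_1
    with proximal gradient (ISTA). Hyperparameters are illustrative.
    """
    zs = [np.zeros_like(x) for _ in kernels]          # one code map per kernel
    for _ in range(n_iter):
        recon = sum(fftconvolve(z, d, mode="same") for z, d in zip(zs, kernels))
        resid = x - recon
        for i, d in enumerate(kernels):
            # Gradient step: correlate residual with the (flipped) kernel.
            grad = fftconvolve(resid, d[::-1, ::-1], mode="same")
            v = zs[i] + step * grad
            # Soft-thresholding enforces sparsity of the code maps.
            zs[i] = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
    recon = sum(fftconvolve(z, d, mode="same") for z, d in zip(zs, kernels))
    return zs, (x - recon) ** 2  # high error = poorly explained (candidate distortion)

# Toy usage with a hypothetical smooth test image and one averaging kernel.
img = np.outer(np.linspace(0.0, 1.0, 16), np.linspace(0.0, 1.0, 16))
bank = [np.full((3, 3), 1.0 / 9.0)]
codes, emap = csc_error_map(img, bank)
```

In a full pipeline, the kernels would be learned from the labeled training database rather than hand-picked, and the error map would be pooled into region-level features.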
