Abstract

In multiview image stitching, the colors of images in a scene can vary when the images are captured under different illumination or camera settings. A common way to produce a seamless stitched image is to transform the colors of a target image to match those of a source image. In this paper, we present a color transfer method based on two premises: first, pixels in the generated image should have colors similar to their corresponding pixels in the source image; second, pixels with similar colors should still have similar colors after color transfer. Our method can be viewed as a semi-supervised manifold learning approach, where the corresponding pixels of the input images serve as the labeled data. Our goal is to learn a final image that not only shares the colors of the source image but also preserves the image structure of the target image. While manifold learning methods aim to find an embedding space that represents the data with minimal structural loss, the proposed method further constrains the solution space using the labeled data. This paper introduces a parametric linear method and a nonparametric nonlinear method to tackle different types of color changes. Experimental results demonstrate the effectiveness of our methods both quantitatively and qualitatively.
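As a point of reference for the parametric linear idea, the sketch below fits a simple global affine color map to corresponding pixel pairs (the "labeled data") by least squares and applies it to the target image. This is only an illustrative baseline under assumed inputs; it is not the paper's exact formulation, which additionally enforces the structure-preservation premise through manifold constraints.

```python
import numpy as np

# Hypothetical baseline: fit a global affine color map T(c) = A @ c + b that
# takes target-image colors to source-image colors using corresponding pixel
# pairs. Illustrative only; not the paper's full method.

def fit_affine_color_transfer(target_px, source_px):
    """target_px, source_px: (N, 3) arrays of corresponding RGB values in [0, 1]."""
    # Augment with a constant column so least squares solves for A and b jointly.
    X = np.hstack([target_px, np.ones((target_px.shape[0], 1))])  # (N, 4)
    M, *_ = np.linalg.lstsq(X, source_px, rcond=None)             # (4, 3)
    A, b = M[:3].T, M[3]
    return A, b

def apply_affine_color_transfer(image, A, b):
    """image: (H, W, 3) float array in [0, 1]; returns the recolored image."""
    out = image.reshape(-1, 3) @ A.T + b
    return np.clip(out, 0.0, 1.0).reshape(image.shape)

# Usage with synthetic stand-in data (replace with real corresponding pixels
# extracted from the overlap region of the two views).
rng = np.random.default_rng(0)
target_px = rng.random((500, 3))
source_px = np.clip(target_px @ np.diag([0.9, 1.1, 0.8]) + 0.05, 0.0, 1.0)

A, b = fit_affine_color_transfer(target_px, source_px)
recolored = apply_affine_color_transfer(rng.random((64, 64, 3)), A, b)
```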
