Abstract
Images of co-planar points in 3-dimensional space taken from different camera positions are a homography apart. Homographies are at the heart of geometric methods in computer vision and are used in geometric camera calibration, 3D reconstruction, stereo vision and image mosaicking among other tasks. In this paper we show the surprising result that homographies are the apposite tool for relating image colors of the same scene when the capture conditions (illumination color, shading and device) change. Three applications of color homographies are investigated. First, we show that color calibration is correctly formulated as a homography problem. Second, we compare the chromaticity distributions of an image of colorful objects to a database of object chromaticity distributions using homography matching. Third, in the color transfer problem, the colors in one image are mapped so that the resulting image color style matches that of a target image; we show that natural image color transfer can be re-interpreted as a color homography mapping. Experiments demonstrate that solving the color homography problem leads to more accurate calibration and improved color-based object recognition, and we present a new direction for developing natural color transfer algorithms.
Highlights
IN image formation there are two important parts, the geometry of how points in space map to image locations and the photometry of how illumination, surface reflectances and camera sensors combine to form the colors in an image
Homographies are at the heart of geometric methods in computer vision and are used in geometric camera calibration [22], 3D reconstruction [23], stereo vision [24] and image mosaicking [25] amongst other tasks
In Appendix A we present a numerical example where the 4-point Alternating Least-Squares (ALS) minimization fails to solve for the homography
Summary
IN image formation there are two important parts: the geometry of how points in space map to image locations, and the photometry of how illumination, surface reflectances and camera sensors combine to form the colors in an image. The color correction transform is found by solving for the homography relating the colors from the RAW to reference display RGBs. Consider that we have a database of colorful objects where the color content of each image is represented by its chromaticity distribution. Even though images I1 and I2 are of the same object, their chromaticity distributions do not match because image I2 is taken under a warmer illumination color. Our hypothesis is that the chromaticity distributions for the same scene lit by two different lights, but where the image shading might change, will be related by a homography.
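The summary above describes relating two sets of image RGBs by a homography while allowing per-pixel shading to change. One way to make this concrete is an alternating least-squares (ALS) loop of the kind the highlights mention: alternately fix a diagonal shading matrix D and solve for the 3x3 homography H, then fix H and solve for D. The sketch below is our own minimal NumPy illustration of that idea, not the authors' reference implementation; the function name, iteration count and variable names are assumptions.

```python
import numpy as np

def color_homography_als(A, B, n_iter=50):
    """Sketch of alternating least-squares for a color homography.

    A, B : (n, 3) arrays of corresponding RGBs in two conditions.
    Seeks a 3x3 matrix H and per-pixel shadings d so that
    diag(d) @ A @ H approximates B.
    """
    n = A.shape[0]
    D = np.eye(n)  # start with no shading change
    H = np.eye(3)
    for _ in range(n_iter):
        # Fix D, solve the linear least-squares problem (D A) H = B for H.
        H, *_ = np.linalg.lstsq(D @ A, B, rcond=None)
        # Fix H, solve for each diagonal shading term independently:
        # d_i minimizes || d_i * (A H)_i - B_i ||.
        P = A @ H
        d = np.sum(P * B, axis=1) / np.maximum(np.sum(P * P, axis=1), 1e-12)
        D = np.diag(d)
    return H, np.diag(D)
```

Each half-step is an exact least-squares minimization, so the residual ||D A H - B|| is non-increasing over iterations; as the paper's Appendix A notes for the 4-point case, however, ALS can still fail to reach the true homography from a poor starting point.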