Camera arrays typically rely on image-stitching algorithms to generate wide field-of-view panoramas, but parallax and color differences caused by varying viewing angles often produce noticeable artifacts in the stitched result. Existing solutions, however, address only specific color-difference issues and are ineffective for pinhole images with parallax. To overcome these limitations, we propose a parallax-tolerant, weakly supervised, pixel-wise deep color correction framework for the image stitching of pinhole camera arrays. The framework consists of two stages. In the first stage, based on the differences between high-dimensional feature vectors extracted by a convolutional module, a parallax-tolerant color correction network with dynamic loss weights adaptively compensates for color differences in overlapping regions. In the second stage, we introduce a gradient-based Markov Random Field inference strategy that estimates correction coefficients for non-overlapping regions, harmonizing them with the corrected overlapping regions. Additionally, we propose a new evaluation metric, Color Differences Across the Seam, to quantitatively measure the naturalness of transitions across the composition seam. Comparative experiments on popular datasets and real captured images demonstrate that our approach outperforms existing solutions in both qualitative and quantitative evaluations, effectively eliminating visible artifacts and producing natural-looking composite images.
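The abstract only names the Color Differences Across the Seam metric; its exact definition appears in the full paper. The following is a minimal illustrative sketch, assuming the metric averages color differences between pixel pairs that face each other across the composition seam; the function name, seam-mask representation, and RGB distance are hypothetical choices, not the authors' definition.

```python
import numpy as np

def color_diff_across_seam(composite: np.ndarray, seam_mask: np.ndarray) -> float:
    """Illustrative seam-crossing color-difference score (assumption, not the paper's metric).

    composite : H x W x 3 float image (stitched result).
    seam_mask : H x W boolean mask, True on one side of the composition seam.
    """
    # Horizontally adjacent pixel pairs whose two members lie on opposite sides of the seam.
    crossing = seam_mask[:, :-1] != seam_mask[:, 1:]
    left = composite[:, :-1][crossing]   # pixels on one side of the seam
    right = composite[:, 1:][crossing]   # their neighbours on the other side
    if left.size == 0:
        return 0.0
    # Mean Euclidean distance in RGB space across the seam; lower values
    # suggest a more natural transition between the stitched source images.
    return float(np.mean(np.linalg.norm(left - right, axis=-1)))
```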