Abstract
Unmanned aerial vehicle (UAV) remote sensing technology has come into wide use in recent years. The poor stability of the UAV platform, however, produces more inconsistencies in hue and illumination among UAV images than are seen with more stable platforms. Image dodging is a process used to reduce these inconsistencies caused by different imaging conditions. We propose an algorithm for automatic image dodging of UAV images using two-dimensional radiometric spatial attributes. We use object-level image smoothing to smooth foreground objects in images and acquire an overall reference background image by relative radiometric correction. We apply the Contourlet transform to separate the high- and low-frequency sections of each single image, and replace the low-frequency section with the low-frequency section extracted from the corresponding region of the overall reference background image. We then apply the inverse Contourlet transform to reconstruct the final dodged images. In this process, each image must be split into blocks of reasonable size, with overlaps, because of its large pixel dimensions. Experimental mosaic results show that our proposed method reduces the uneven distribution of hue and illumination. Moreover, it effectively eliminates dark-bright interstrip effects caused by shadows and vignetting in UAV images while maximally preserving image texture information.
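To make the low-frequency replacement step concrete, the sketch below processes one image block against the corresponding region of the reference background. Because Contourlet implementations are not part of common Python libraries, a two-dimensional wavelet decomposition (PyWavelets) is used here as a stand-in for the Contourlet transform; the function name, wavelet choice, and decomposition level are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import pywt  # PyWavelets; a wavelet decomposition stands in here for the Contourlet transform


def dodge_block(block, reference_block, wavelet="db4", level=3):
    """Replace the low-frequency content of `block` with that of the
    corresponding region of the overall reference background image,
    while keeping the block's own high-frequency (texture) detail."""
    coeffs_block = pywt.wavedec2(block.astype(np.float64), wavelet, level=level)
    coeffs_ref = pywt.wavedec2(reference_block.astype(np.float64), wavelet, level=level)

    # Swap only the coarsest approximation (low-frequency) sub-band; the
    # detail sub-bands, which carry the texture, stay with the original block.
    coeffs_block[0] = coeffs_ref[0]

    # Reconstruct, crop to the original block shape (the reconstruction may be
    # a pixel larger per axis), and clip back to the valid 8-bit range.
    dodged = pywt.waverec2(coeffs_block, wavelet)
    return np.clip(dodged[: block.shape[0], : block.shape[1]], 0, 255)
```

Applied block by block, with overlapping blocks blended back together, this corresponds to the block-splitting step described above; swapping the wavelet for a Contourlet implementation would recover the directional sub-bands the method relies on.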
Highlights
Remote sensing images are usually acquired under dissimilar imaging conditions, such as different periods, different illumination intensities, and different sensor angles
Image dodging is important when dealing with unmanned aerial vehicle (UAV) images collected from fixed-wing UAVs without gimbals, from different solar elevations, or from multiple flights under varying weather conditions
After carefully analyzing the original images, we found that when the UAV flew along different strips, differing amounts of shadow-casting objects were captured on different sides of the image because of occlusion, causing an uneven brightness distribution within a single image
Summary
Remote sensing images are usually acquired under dissimilar imaging conditions, such as different periods, different illumination intensities, and different sensor angles. To address the problems in histogram matching, linear modeling has received a great deal of attention.[12,13,14,15,16] In these approaches, the combined value of hue and illumination variation among images is estimated statistically from pixels sampled from the overlapping areas of several images. This value is used to reduce the differences among the images, but it does not represent the true gray difference among them. The linear statistical method based on mean and SD rests on the idea that two images differ in gray level in a "least mean squares" sense, but this approach reduces local contrast in the images being processed. These methods were proposed for satellite imagery or traditional aerial images. In our method, foreground bright and dark objects are smoothed before the overall mean difference is obtained, to reduce their adverse effect on the average-difference estimate.
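For reference, a minimal sketch of the mean/SD linear statistical matching discussed above is shown below; in practice the reference statistics would be sampled from the overlapping areas of adjacent images, and the function and parameter names here are illustrative assumptions.

```python
import numpy as np


def linear_match(target, reference_mean, reference_std, eps=1e-6):
    """Mean/standard-deviation (linear) radiometric matching: apply a gain
    and offset so the target image's global statistics agree with the
    reference statistics in the least-mean-squares sense."""
    t = target.astype(np.float64)
    gain = reference_std / (t.std() + eps)      # contrast stretch
    offset = reference_mean - gain * t.mean()   # brightness shift
    return np.clip(gain * t + offset, 0, 255)
```

Because a single global gain is applied, local contrast in the matched image is compressed wherever the reference statistics differ strongly from the local scene, which is the drawback noted above.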