Abstract
The mosaicking of Unmanned Aerial Vehicle (UAV) imagery usually requires information from additional sensors, such as a Global Positioning System (GPS) and an Inertial Measurement Unit (IMU), to facilitate direct orientation, or 3D reconstruction approaches (e.g., structure-from-motion) to recover the camera poses. In this paper, we propose a novel mosaicking method for UAV imagery in which neither direct nor indirect orientation procedures are required. Inspired by the embedded deformation model, a widely used non-rigid mesh deformation model, we present a novel objective function for image mosaicking. First, we construct a feature correspondence energy term that minimizes the sum of the squared distances between matched feature pairs to align the images geometrically. Second, we model a regularization term that constrains the image transformation parameters directly by keeping all transformations as rigid as possible, avoiding global distortion in the final mosaic. Experimental results presented herein demonstrate that the accuracy of our method is twice as high as that of an existing (purely image-based) approach, with the associated benefits of significantly faster processing times and improved robustness with respect to reference image selection.
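As a rough illustration of the objective described above (a sketch, not the authors' implementation), the two energy terms can be written for per-image 2D affine transforms (A_i, t_i). The function names, the dictionary-based parameterization, and the simple Frobenius-norm rigidity penalty ||AᵀA − I||²_F are illustrative assumptions; the paper's regularizer follows the embedded deformation model rather than this exact form.

```python
import numpy as np

def transform(A, t, p):
    # Warp a 2D point with an image's affine transform: x -> A @ x + t.
    return A @ p + t

def correspondence_energy(params, matches):
    # Sum of squared distances between matched feature pairs after each
    # point is warped by its own image's transform.
    # params: {image_id: (A, t)}; matches: [((i, p), (j, q)), ...]
    e = 0.0
    for (i, p), (j, q) in matches:
        Ai, ti = params[i]
        Aj, tj = params[j]
        d = transform(Ai, ti, p) - transform(Aj, tj, q)
        e += float(d @ d)
    return e

def rigidity_energy(params):
    # As-rigid-as-possible regularizer (illustrative form): penalize the
    # deviation of each linear part from a rotation via ||A^T A - I||_F^2.
    e = 0.0
    for A, _ in params.values():
        M = A.T @ A - np.eye(2)
        e += float(np.sum(M * M))
    return e

def total_energy(params, matches, lam=1.0):
    # Weighted sum of alignment and rigidity terms; lam trades off
    # geometric alignment against shape preservation.
    return correspondence_energy(params, matches) + lam * rigidity_energy(params)
```

Minimizing `total_energy` over all (A_i, t_i) jointly aligns the images while discouraging the accumulated shear and scale drift that causes global distortion in long image sequences.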
Highlights
A huge market is currently emerging from the vast number of potential applications and services offered by small, low-cost, and low-flying unmanned aerial vehicles (UAVs).
We propose a novel image mosaicking method that requires no camera calibration parameters, camera poses, or any 3D reconstruction procedure.
Based on the method of Sumner et al., we introduce a local rigid deformation constraint to the problem of UAV image mosaicking to largely preserve the original shape of the objects in the image.
Summary
A huge market is currently emerging from the vast number of potential applications and services offered by small, low-cost, and low-flying unmanned aerial vehicles (UAVs). UAVs can carry payloads such as cameras, infrared cameras, and other sensors. They enable us to obtain a synoptic view of an area, which is helpful in applications such as surveillance and reconnaissance, environmental monitoring, and disaster assessment and management (see, e.g., [1,2,3]). Mosaicking of UAV imagery usually requires extra information, such as camera calibration parameters, position and orientation data from GPS/IMU, ground control points (GCPs), or a reference map, to achieve accurate mosaicking results [1,2,3,4,5,6,7]. When GPS/IMU data are not accurate enough for direct orientation determination, pose estimation using a 3D reconstruction method is usually employed to refine the camera poses.