Abstract

Image stitching is the process of combining multiple overlapping images of a scene, captured from different viewpoints, into a single high-resolution image. Generic 2D images captured by ground-level optical cameras do not provide an aerial (top-view) perspective of terrestrial scenes, so stitching such images produces results that lack top-view information. Images captured by UAVs (Unmanned Aerial Vehicles) offer this aerial perspective, typically with 50–80% overlap between consecutive images and full coverage of the scene. This work discusses the feature extraction and feature matching methods used for drone image stitching. In this paper, we compare the performance of three feature extraction techniques, SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF), for detecting key features. The detected features are then matched using feature matching algorithms such as FLANN (Fast Library for Approximate Nearest Neighbors) and BF (Brute Force). Not all matched keypoints are useful for creating a panoramic image, so the RANSAC (Random Sample Consensus) algorithm is applied to separate the inliers from the outliers; the resulting interest points are used to create the high-resolution image.
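As a minimal sketch of the outlier-rejection step described above, the loop below separates inliers from outliers among matched keypoint pairs with RANSAC. For brevity it uses a pure 2-D translation as the motion model rather than the full homography a stitching pipeline would fit, and all function and parameter names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ransac_translation(src, dst, threshold=3.0, iters=200, seed=0):
    """Separate inliers from outliers among matched keypoints with RANSAC.

    src, dst: (N, 2) arrays of matched keypoint coordinates in two images.
    Returns the estimated translation and a boolean inlier mask.
    (Illustrative sketch: a real stitcher would fit a homography instead.)
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))           # minimal sample: one match
        t = dst[i] - src[i]                  # candidate translation
        residuals = np.linalg.norm(src + t - dst, axis=1)
        mask = residuals < threshold         # consensus set for this model
        if mask.sum() > best_mask.sum():
            best_mask = mask
    # Refit the model on the final consensus (inlier) set only.
    best_t = (dst[best_mask] - src[best_mask]).mean(axis=0)
    return best_t, best_mask

# Synthetic matches: 40 true correspondences plus 10 random outliers.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (50, 2))
dst = src + np.array([12.0, -7.0])           # true shift between images
dst[:10] = rng.uniform(0, 100, (10, 2))      # spurious matches (outliers)
shift, inliers = ransac_translation(src, dst)
```

In a real pipeline the inlier correspondences would then feed a homography estimate used to warp and blend the images into the panorama.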
