Abstract
A method is presented for scene detection and estimation using high-resolution imagery acquired through autonomous drone navigation aided by landmark detection and recognition. The proposed system comprises a drone platform that supports efficient autonomous flight; it captures images and provides real-time video streaming of the ground cover using a camera equipped with a 14-megapixel CMOS sensor and a fish-eye lens. In addition, landmark detection and recognition are performed by applying the histogram of oriented gradients (HOG) and linear support vector machine (SVM) methods to each frame of the video stream. The high spatial resolution of the acquired drone images simplifies the detection and interpretation of the environment. First, through image processing, orthomosaic images and a 3-D reconstruction of the scene (point clouds) are generated from a set of drone images using the automatic photogrammetric technique known as "structure from motion." Subsequently, an unsupervised classification method is applied to the high-resolution images to detect and differentiate environmental classes (scene interpretation) in the investigated area. Finally, the results of the proposed method are evaluated by comparing them against ground-truth points.
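The per-frame landmark-detection step pairs HOG descriptors with a linear SVM classifier. The following is a minimal sketch of that pairing, assuming scikit-image, scikit-learn, and OpenCV as the underlying libraries; the window size, HOG parameters, and labelled training patches are illustrative assumptions rather than values reported in the paper.

# Minimal sketch of per-frame landmark detection with HOG features and a
# linear SVM, loosely following the pipeline described in the abstract.
# Library choices and all parameters below are illustrative assumptions,
# not values taken from the paper.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

PATCH_SIZE = (128, 128)  # assumed window size for a landmark candidate


def hog_features(patch):
    """Compute a HOG descriptor for one grayscale patch of shape PATCH_SIZE."""
    return hog(patch,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm="L2-Hys")


def train_detector(positive_patches, negative_patches):
    """Fit a linear SVM on labelled landmark / background patches (all PATCH_SIZE)."""
    X = [hog_features(p) for p in positive_patches + negative_patches]
    y = [1] * len(positive_patches) + [0] * len(negative_patches)
    clf = LinearSVC(C=1.0)
    clf.fit(np.array(X), np.array(y))
    return clf


def detect_in_frame(clf, frame, step=64):
    """Slide a window over one video frame and return windows classified as landmarks."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hits = []
    h, w = gray.shape
    for y in range(0, h - PATCH_SIZE[1] + 1, step):
        for x in range(0, w - PATCH_SIZE[0] + 1, step):
            patch = gray[y:y + PATCH_SIZE[1], x:x + PATCH_SIZE[0]]
            if clf.predict([hog_features(patch)])[0] == 1:
                hits.append((x, y, PATCH_SIZE[0], PATCH_SIZE[1]))
    return hits

The unsupervised scene-interpretation step is not named in the abstract; the sketch below assumes k-means clustering over the pixels of the orthomosaic purely as an illustration, with the number of land-cover classes chosen arbitrarily.

# Minimal sketch of unsupervised land-cover classification of an orthomosaic.
# The use of k-means and the choice of five classes are assumptions made
# for illustration; the paper does not specify the algorithm in the abstract.
from sklearn.cluster import KMeans

def classify_orthomosaic(orthomosaic_rgb, n_classes=5):
    """Cluster the pixels of an RGB orthomosaic into n_classes land-cover labels."""
    h, w, c = orthomosaic_rgb.shape
    pixels = orthomosaic_rgb.reshape(-1, c).astype(float)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(pixels)
    return labels.reshape(h, w)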