Abstract

We present ESPADA, a new aerial image dataset for training deep neural networks that estimate depth from a single aerial image. Because it is difficult to build aerial datasets that pair chromatic images with their corresponding depth images, simulators such as AirSim have been proposed to generate synthetic images from photorealistic scenes, making it possible to produce thousands of images for training and evaluating neural models. However, we argue that synthetic photorealistic aerial datasets can be improved by adding images rendered from photogrammetric models imported into the simulator, enabling a less artificial generation of both chromatic and depth images. To assess the quality of these images, we compare the performance of four deep neural networks whose pre-trained models and re-training code are publicly available. We also use the RGB-D version of ORB-SLAM to indirectly assess the estimated depth images: chromatic images from three aerial videos, together with the depth images estimated by the networks trained on ESPADA, are fed into ORB-SLAM, and the estimated camera trajectory is compared against the GPS flight trajectory. Our results indicate that images generated from photogrammetric models improve the performance of depth estimation from a single aerial image.
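The trajectory comparison described above (SLAM pose estimates versus the GPS track) is commonly performed by aligning the two trajectories with a similarity transform and reporting the absolute trajectory error (ATE). The abstract does not specify the authors' exact procedure, so the sketch below is a minimal, hypothetical illustration of that standard step using Umeyama alignment in NumPy; the function names and the assumption of time-synchronized, corresponding 3-D positions are ours, not the paper's.

```python
import numpy as np


def umeyama_alignment(est, gt):
    """Similarity transform (scale, rotation, translation) mapping
    estimated positions onto ground-truth positions (Umeyama, 1991).

    est, gt: (N, 3) arrays of corresponding 3-D camera positions,
    assumed already associated by timestamp (our assumption).
    """
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    e, g = est - mu_est, gt - mu_gt

    # Cross-covariance between centered point sets.
    cov = g.T @ e / est.shape[0]
    U, D, Vt = np.linalg.svd(cov)

    # Reflection correction so R is a proper rotation.
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    scale = np.trace(np.diag(D) @ S) / e.var(axis=0).sum()
    t = mu_gt - scale * (R @ mu_est)
    return scale, R, t


def absolute_trajectory_error(est, gt):
    """RMSE of position error after similarity alignment (ATE)."""
    s, R, t = umeyama_alignment(est, gt)
    aligned = (s * (R @ est.T)).T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))


# Usage: est holds ORB-SLAM positions, gt the GPS-derived positions.
est = np.random.rand(100, 3)  # placeholder trajectories
gt = est * 2.0 + np.array([1.0, 0.0, -3.0])
print(f"ATE: {absolute_trajectory_error(est, gt):.4f} m")
```

The similarity (rather than rigid) alignment matters here because monocular depth networks can introduce a global scale offset, so an ATE computed after scale alignment reflects the trajectory's shape rather than its absolute scale.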
