Abstract
Recently, demand for small drones has been increasing owing to their compact size and agility in complex indoor environments. Accordingly, the safety of small-drone navigation has become an issue of significant importance. For drones to navigate safely through complex environments, accurate estimation of the time-to-collision (TTC) to obstacles is needed. To this end, in this paper, we propose a deep learning-based TTC estimation algorithm. Training generalizable neural networks for TTC estimation requires large datasets that include collision cases; however, in real-world environments it is impractical to crash drones into obstacles to collect a significant amount of data. Simulation environments can facilitate data acquisition, but data from simulated environments can differ considerably from real-world data, a discrepancy commonly termed the reality gap. In this study, to reduce this reality gap, a sim-to-real method based on a variant of the generative adversarial network (GAN) is used to convert simulated images into realistic synthetic images. In addition, to account for the uncertainties introduced by using the synthetic dataset, an aleatoric loss function and the Monte Carlo dropout method are employed. Furthermore, we improve the performance of the TTC estimation algorithm by replacing conventional convolutional neural networks (CNNs) with convolutional long short-term memory (ConvLSTM) layers, which handle time-series data better than CNNs. To validate the proposed framework, real flight experiments were carried out in various indoor environments. The proposed framework reduces the average TTC estimation error by 0.21 s compared with the CNN-based baseline.
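The abstract does not give implementation details for the uncertainty treatment, but the two named ingredients are standard. A minimal sketch in PyTorch, assuming the network has a second output head that predicts the log-variance of the observation noise (the heteroscedastic aleatoric loss of Kendall and Gal, 2017) and that dropout is kept active at inference for Monte Carlo sampling; the tuple-valued model output and the sample count `n_samples` are assumptions, not the paper's actual configuration:

```python
import torch
import torch.nn as nn

def aleatoric_loss(ttc_pred, log_var, ttc_true):
    """Heteroscedastic regression loss: the network predicts both a TTC
    value and the log-variance of the observation noise, so samples it
    deems noisy are automatically down-weighted during training."""
    return torch.mean(
        0.5 * torch.exp(-log_var) * (ttc_pred - ttc_true) ** 2
        + 0.5 * log_var
    )

@torch.no_grad()
def mc_dropout_predict(model, frames, n_samples=20):
    """Monte Carlo dropout: run several stochastic forward passes with
    dropout enabled and average them; the spread of the predictions
    approximates the epistemic uncertainty."""
    model.eval()
    for m in model.modules():          # re-enable only the dropout layers
        if isinstance(m, nn.Dropout):
            m.train()
    # Assumes model(frames) returns (ttc_pred, log_var); keep the TTC head.
    preds = torch.stack([model(frames)[0] for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)
```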
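The ConvLSTM substitution can be pictured similarly. PyTorch has no built-in ConvLSTM layer, so the sketch below implements a minimal cell in the style of Shi et al. (2015); the abstract does not specify the paper's architecture, so the channel counts, kernel size, and clip shape are illustrative only:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: the four LSTM gates are computed with a
    convolution over the concatenated input and hidden state, so the
    recurrence preserves the spatial layout of image features."""
    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        self.hidden_ch = hidden_ch
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(
            self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Illustrative use on a clip of T frames with shape (B, T, C, H, W):
cell = ConvLSTMCell(in_ch=3, hidden_ch=16)
clip = torch.randn(2, 8, 3, 64, 64)
h = torch.zeros(2, 16, 64, 64)
c = torch.zeros_like(h)
for t in range(clip.shape[1]):
    h, c = cell(clip[:, t], (h, c))
# h now summarizes the clip's motion; a small head on h would regress TTC.
```

Unlike a per-frame CNN, the cell carries a spatial hidden state across frames, which is why ConvLSTM layers are generally better suited to the time-series nature of approach sequences.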