Abstract

Machine learning-based models for object detection rely heavily on large datasets of labeled images. When models trained on these datasets are applied to Unmanned Aerial Vehicle (UAV) imagery, the conditions under which the training images were created (lighting, altitude, angle) may differ from the conditions in which the UAV operates, leading to misclassifications. This problem becomes even more pressing in safety-critical applications, where failures can have severe consequences and constitute obstacles to the certification of cognitive UAV components. In a case study on car detection in low-altitude aerial imagery, we show that using both artificial and real images for model training has a positive effect on the performance of object detection algorithms when the trained model is applied to images from another domain. Additionally, we show that weak points of object detection neural networks trained on real-world images transfer to synthetic images, and that synthetic data can be used to evaluate neural networks trained on real-world data. Since simulated images are easy to create and object labels are inherently available, the presented approaches show a promising direction for scenarios where adequate datasets are difficult to obtain, as well as for the targeted exploration of weak points of object detection algorithms.
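The abstract's central idea of training on a mixture of real and synthetic images can be illustrated with a minimal sketch. The function below combines two labeled sample pools at a chosen synthetic-to-real ratio; the function name, signature, and the `synthetic_fraction` parameter are illustrative assumptions for this sketch, not an API or procedure from the paper.

```python
import random


def mix_datasets(real, synthetic, synthetic_fraction=0.5, seed=0):
    """Combine real and synthetic labeled samples into one training set.

    `synthetic_fraction` is the share of synthetic samples in the final
    set. All real samples are kept; synthetic samples are subsampled to
    reach the requested ratio. (Illustrative sketch, not from the paper.)
    """
    rng = random.Random(seed)
    # Number of synthetic samples needed so that
    # n_synth / (len(real) + n_synth) == synthetic_fraction.
    n_synth = int(len(real) * synthetic_fraction / (1.0 - synthetic_fraction))
    n_synth = min(n_synth, len(synthetic))
    mixed = list(real) + rng.sample(list(synthetic), n_synth)
    rng.shuffle(mixed)
    return mixed


# Toy example: each sample is an (image_id, label) pair.
real = [(f"real_{i}", "car") for i in range(8)]
synthetic = [(f"sim_{i}", "car") for i in range(20)]
train_set = mix_datasets(real, synthetic, synthetic_fraction=0.5)
```

With an even split, the 8 real samples are joined by 8 of the 20 synthetic samples, giving a shuffled 16-sample training set; in practice the same idea is applied to full image datasets with bounding-box labels, where the simulator provides those labels for free.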

