Abstract

Autonomous unmanned aircraft need a sound semantic understanding of their surroundings to plan safe routes or to find safe landing sites, for example by means of semantic segmentation of an image stream. Neural networks currently achieve state-of-the-art results on semantic segmentation tasks, but they require a large amount of diverse training data to do so. In aviation, such data are hard to acquire, and synthetic data generated with game engines could solve this problem. However, related work, e.g., in the automotive sector, shows a performance drop when models trained on synthetic images are applied to real ones. In this work, the use of synthetic training data for semantic segmentation of the environment from a UAV perspective is investigated. A real image dataset captured from a UAV perspective is stylistically replicated in a game engine, and images extracted from it are used to train a neural network. The evaluation is carried out on real images and shows that training on synthetic images alone is not sufficient, but that synthetic pre-training followed by fine-tuning on real images can significantly reduce the amount of real data needed for training. This research shows that synthetic images may be a promising direction for bringing neural networks for environment perception into aerospace applications.
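The two-phase workflow the abstract describes, pre-training on synthetic game-engine imagery and then fine-tuning on a small set of real UAV images, can be sketched as below. This is a minimal illustration, not the authors' implementation: the segmentation architecture (DeepLabv3), class count, loaders built from random tensors, and all hyperparameters are placeholder assumptions standing in for the paper's actual setup.

```python
# Minimal sketch of synthetic pre-training + real fine-tuning for
# semantic segmentation. Assumes PyTorch and torchvision; the data
# loaders here use random tensors purely as stand-ins for the
# game-engine renders and the real UAV dataset.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 8  # hypothetical number of terrain/obstacle classes
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"


def dummy_loader(num_samples: int) -> DataLoader:
    # Placeholder data; in practice these would be rendered synthetic
    # frames or real UAV images, each paired with a label mask.
    images = torch.randn(num_samples, 3, 256, 256)
    masks = torch.randint(0, NUM_CLASSES, (num_samples, 256, 256))
    return DataLoader(TensorDataset(images, masks), batch_size=4)


def train(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> None:
    """One generic training loop, reused for both phases."""
    model.to(DEVICE).train()
    criterion = nn.CrossEntropyLoss(ignore_index=255)  # 255 = unlabeled
    optimizer = optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, masks in loader:
            images, masks = images.to(DEVICE), masks.to(DEVICE)
            optimizer.zero_grad()
            loss = criterion(model(images)["out"], masks)
            loss.backward()
            optimizer.step()


model = deeplabv3_resnet50(num_classes=NUM_CLASSES)

# Phase 1: pre-train on the large synthetic set from the game engine.
train(model, dummy_loader(16), epochs=50, lr=1e-4)

# Phase 2: fine-tune on a much smaller real UAV set at a lower learning
# rate; this is where the reduction in required real data would pay off.
train(model, dummy_loader(8), epochs=10, lr=1e-5)
```

Fine-tuning at a reduced learning rate is a common way to adapt synthetically pre-trained weights without overwriting them; the specific schedule used in the paper is not given in the abstract.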
