Abstract

This paper presents a method for collecting datasets in virtual reality for semantic-image-segmentation-based landing point recognition. The virtual reality scenario assumed that landing point images were obtained from a vertical take-off and landing unmanned aerial vehicle equipped with a downward-facing camera. A semantic image segmentation technique based on a deep neural network with a U-Net architecture was implemented, and the network was trained to recognize both landing points and obstacles. During training, only datasets collected with the automatic labeling technology in virtual reality were used, in order to analyze whether landing points and obstacles could be recognized in images of real environments. The analysis confirmed that the trained network achieved meaningful performance using only datasets collected from a virtual environment resembling the actual one.
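To make the described setup concrete, the sketch below shows a minimal U-Net-style segmentation network trained with per-pixel cross-entropy against automatically generated masks, in the spirit of the abstract. The abstract does not specify the framework, network depth, channel widths, or class set, so everything here is assumed: PyTorch as the framework, a two-level encoder-decoder, and three hypothetical classes (background, landing point, obstacle). Names such as `MiniUNet` and `conv_block` are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Minimal two-level U-Net for per-pixel classification (illustrative)."""
    def __init__(self, in_channels=3, num_classes=3):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Decoder with U-Net skip connections to the encoder features.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # per-pixel class logits

# One training step with placeholder data standing in for downward-camera
# frames and the automatically labeled masks from the virtual environment.
# Assumed classes: 0 = background, 1 = landing point, 2 = obstacle.
model = MiniUNet(in_channels=3, num_classes=3)
images = torch.randn(2, 3, 128, 128)          # placeholder RGB frames
labels = torch.randint(0, 3, (2, 128, 128))   # placeholder per-pixel class ids
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
```

The automatic labeling step itself is the key enabler here: in a virtual environment, the renderer already knows which object each pixel belongs to, so ground-truth masks like `labels` above can be generated without manual annotation.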
