Abstract

This paper presents a technique for collecting datasets in virtual reality for semantic-segmentation-based landing point recognition. In the virtual reality scenario, landing point images were assumed to be captured by a downward-facing camera mounted on a vertical take-off and landing (VTOL) unmanned aerial vehicle. A semantic image segmentation model based on a deep neural network with the U-Net architecture was implemented and trained to recognize both landing points and obstacles. During training, only datasets collected with the automatic labeling technique in virtual reality were used, in order to analyze whether landing points and obstacles could be recognized in images of real environments. The results confirmed that the trained network achieved meaningful performance using only datasets collected from a virtual environment resembling the real one.
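The automatic labeling described above can be sketched as follows. This is a minimal illustration, assuming the VR renderer can output a "segmentation pass" in which every object is flat-shaded with a class-specific RGB color; the class names, colors, and image representation here are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical mapping from renderer flat-shade colors to class indices.
# In a VR engine, each object category would be assigned one of these colors.
COLOR_TO_CLASS = {
    (0, 0, 0): 0,      # background
    (0, 255, 0): 1,    # landing point
    (255, 0, 0): 2,    # obstacle
}

def labels_from_render(seg_pass):
    """Convert an H x W image of RGB tuples into an H x W label mask.

    Pixels with unrecognized colors (e.g. anti-aliasing artifacts at
    object edges) fall back to the background class.
    """
    return [
        [COLOR_TO_CLASS.get(px, 0) for px in row]
        for row in seg_pass
    ]

# Tiny 2 x 3 segmentation pass: one landing-point pixel, one obstacle pixel.
render = [
    [(0, 0, 0), (0, 255, 0), (0, 0, 0)],
    [(255, 0, 0), (0, 0, 0), (0, 0, 0)],
]
mask = labels_from_render(render)
```

Because the renderer already knows each object's class, such a pass yields pixel-perfect ground-truth masks with no manual annotation, which is what makes large-scale dataset collection in virtual environments attractive for training segmentation networks.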
