Abstract
Growing interest in indoor spaces demands the development of Location-Based Services (LBSs) for navigation, routing, and spatial queries in these environments. Applications providing such services must combine visualization data with topology data to deliver relevant services to users. Indoor LBS applications have frequently relied on geometric object-based visualization models, which integrate easily with IndoorGML, an indoor topology model, but are costly to construct and produce heavy data. Consequently, image-based visualization models have drawn attention as an alternative. However, using such models alongside IndoorGML requires identifying objects in the image, which is a limitation because objects are difficult to identify directly from image pixels. To overcome this limitation and reconsider the usability of image-based indoor LBSs, this study presents a method that uses deep learning to automatically detect, from images, the spatial objects required to construct IndoorGML. The method targets objects central to indoor LBSs, namely door and stair objects appearing in indoor omnidirectional images. This study proposes a detailed procedure for constructing a training dataset for indoor spatial object detection. It also presents a method for refining the training dataset and for directly acquiring omnidirectional image data, so that the trained object detection model can be applied across various buildings while maintaining acceptable accuracy.
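As an illustration of the kind of post-processing such a detection pipeline involves, the sketch below filters raw detector outputs down to the two target classes with confidence thresholding and IoU-based non-maximum suppression. This is a generic, hypothetical sketch, not the paper's implementation: the class names ("door", "stair"), thresholds, and pixel box format (x1, y1, x2, y2) are illustrative assumptions.

```python
# Hypothetical sketch of detector post-processing: keep only confident
# "door"/"stair" boxes and suppress overlapping duplicates per class.
# Thresholds and box format are illustrative assumptions, not from the paper.

def iou(a, b):
    """Intersection-over-union of two pixel boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_detections(dets, conf_thresh=0.5, iou_thresh=0.5,
                      classes=("door", "stair")):
    """Confidence-filter target classes, then greedy per-class NMS."""
    kept = []
    # Process candidates in descending score order (greedy NMS).
    candidates = sorted(
        (d for d in dets
         if d["label"] in classes and d["score"] >= conf_thresh),
        key=lambda d: d["score"], reverse=True)
    for d in candidates:
        # Keep a box only if it does not overlap an already-kept
        # box of the same class beyond the IoU threshold.
        if all(iou(d["box"], k["box"]) <= iou_thresh
               for k in kept if k["label"] == d["label"]):
            kept.append(d)
    return kept
```

For example, given two heavily overlapping "door" boxes, only the higher-scoring one survives, while low-confidence boxes and non-target classes are dropped up front.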
Published in: Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography