Abstract

The fields of architecture and urban planning widely apply image-based spatial analysis. However, many features can influence spatial conditions, and not all of them can be explicitly defined. In this research, we propose a new deep learning framework that extracts spatial features without specifying them explicitly and uses these features for spatial analysis and prediction. As a first step, we formulate a deep convolutional neural network (DCNN) learning problem on omnidirectional images that include depth images as well as ordinary RGB images. We then use these images, rendered in a game engine, as explanatory variables to predict subjects' preferences regarding a virtual urban space. The DCNNs learn the relationship between the evaluation results and the omnidirectional camera images, and we confirm the prediction accuracy on verification data.
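The pipeline described above can be sketched minimally: an omnidirectional render with four channels (RGB plus depth) is passed through convolutional feature extraction, and the pooled features drive a scalar preference prediction. The following NumPy sketch is purely illustrative; the paper's actual DCNN architecture, layer counts, and training procedure are not specified here, and all parameter shapes below are assumptions.

```python
import numpy as np

def conv2d_relu(x, w, b):
    """Naive valid convolution with ReLU.
    x: (C, H, W) input, w: (F, C, kH, kW) filters, b: (F,) biases."""
    C, H, W = x.shape
    F, _, kH, kW = w.shape
    out = np.zeros((F, H - kH + 1, W - kW + 1))
    for f in range(F):
        for i in range(H - kH + 1):
            for j in range(W - kW + 1):
                out[f, i, j] = np.sum(x[:, i:i + kH, j:j + kW] * w[f]) + b[f]
    return np.maximum(out, 0.0)

def predict_preference(image, params):
    """image: (4, H, W) array — RGB + depth channels of an
    omnidirectional (e.g. equirectangular) render from a game engine."""
    h = conv2d_relu(image, params["w1"], params["b1"])
    pooled = h.mean(axis=(1, 2))                # global average pooling
    score = float(pooled @ params["w2"] + params["b2"])
    return 1.0 / (1.0 + np.exp(-score))         # preference probability in (0, 1)

# Hypothetical parameters and input, for shape illustration only.
rng = np.random.default_rng(0)
params = {
    "w1": rng.normal(0.0, 0.1, (8, 4, 3, 3)),   # 8 filters over 4 input channels
    "b1": np.zeros(8),
    "w2": rng.normal(0.0, 0.1, 8),
    "b2": 0.0,
}
image = rng.random((4, 32, 64))                 # small stand-in for a real panorama
p = predict_preference(image, params)
print(p)
```

In practice such a model would be trained end-to-end on pairs of rendered omnidirectional images and recorded subject evaluations, so that the convolutional filters learn the spatial features implicitly rather than from hand-defined descriptors.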
