Abstract

In this study, we developed a method for generating omnidirectional depth images from corresponding omnidirectional RGB images of streetscapes by training pix2pix on pairs of omnidirectional RGB and depth images created by computer graphics. Models trained on image series shot under different site and weather conditions were then applied to Google Street View images to generate depth images, and the validity of the generated depth images was evaluated visually. In addition, we conducted experiments in which multiple participants evaluated Google Street View images, and we constructed a model that estimates the evaluation value of these images, with and without the depth images, using a learning-to-rank method with a deep convolutional neural network (DCNN). The results demonstrate the extent to which the generalization performance of the streetscape evaluation model changes depending on the presence or absence of depth images.
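The abstract names learning-to-rank with a DCNN but does not spell out the loss; a minimal sketch of one common formulation, a RankNet-style pairwise loss over scalar scores from a scoring model (the function name and setup here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def pairwise_rank_loss(s_preferred, s_other):
    """RankNet-style pairwise loss for learning-to-rank.

    s_preferred / s_other are scalar evaluation scores emitted by a
    scoring model (e.g. a DCNN head) for two streetscape images, where
    the first image was preferred by participants.  The loss
    -log sigmoid(s_preferred - s_other) is small when the model ranks
    the preferred image higher; log1p(exp(-x)) is a numerically stable
    form of that expression.
    """
    return float(np.log1p(np.exp(-(s_preferred - s_other))))
```

Training on participant comparisons then amounts to minimizing this loss over all preferred/other image pairs.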

Highlights

  • In the fields of architecture and urban planning, analysis is commonly performed to evaluate the impression of space

  • We developed a method for generating omnidirectional depth images from corresponding omnidirectional RGB images of streetscapes by learning each pair of omnidirectional RGB and depth images created by computer graphics (CG) using pix2pix (Isola et al., 2017), a general-purpose image-to-image translation method based on deep learning

  • The models trained with different series of images shot under different site and weather conditions were applied to Google Street View (GSV) images to generate depth images
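pix2pix trains a conditional GAN whose generator objective combines an adversarial term with an L1 reconstruction term; a minimal numerical sketch of that generator loss, assuming `d_fake` holds discriminator probabilities for the generated depth images (names, shapes, and array contents are illustrative):

```python
import numpy as np

def generator_loss(d_fake, fake_depth, real_depth, lam=100.0):
    """Sketch of the pix2pix generator objective (Isola et al., 2017):
    a non-saturating adversarial term rewarding the generator for
    fooling the discriminator, plus a lambda-weighted L1 term pulling
    the generated depth image toward the CG ground truth.  lam=100
    follows the pix2pix default; inputs are NumPy arrays in [0, 1].
    """
    eps = 1e-12
    adv = -np.mean(np.log(d_fake + eps))           # -log D(G(x)): low when D is fooled
    l1 = np.mean(np.abs(fake_depth - real_depth))  # pixel-wise L1 to the CG depth map
    return float(adv + lam * l1)
```

The L1 term is what keeps the generated depth maps pixel-aligned with the CG targets; the adversarial term sharpens them beyond what L1 alone produces.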


Summary

INTRODUCTION

In the fields of architecture and urban planning, analysis is commonly performed to evaluate the impression of space. Liu et al. (2017) and Law et al. (2017) conducted studies modeling urban landscape evaluation and classification using DCNNs with a large quantity of GSV images. Their studies used RGB images with a normal angle of view. Based on the above, Takizawa and Furuta (2017) shot a large number of omnidirectional RGB images and depth images in a virtual urban space constructed with the game engine Unity[1] (see Figure 1). Using these images as input data, they constructed an evaluation model to estimate the results of another evaluation experiment of a virtual urban landscape with a DCNN.

RESULTS

CONCLUSION