Abstract
Various research applications require detailed metrics to describe the form and composition of cities at fine scales, but computing such parameters remains a challenge due to limited data availability, quality, and processing capabilities. We developed an innovative big data approach to derive street-level morphology and urban feature composition as experienced by a pedestrian from Google Street View (GSV) imagery. We employed a scalable deep learning framework to segment 90-degree field of view GSV image cubes into six classes: sky, trees, buildings, impervious surfaces, pervious surfaces, and non-permanent objects. We increased the classification accuracy by differentiating between three view directions (lateral, down, and up) and by introducing a void class as a training label. To model the urban environment as perceived by a pedestrian in a street canyon, we projected the segmented image cubes onto spheres and evaluated the fraction of each surface class on the sphere. To demonstrate the application of our approach, we analyzed the urban form and composition of Philadelphia County and three Philadelphia neighborhoods (a suburb, center city, and a lower-income neighborhood) using stacked area graphs. Our method is fully scalable to other geographic locations and constitutes an important step towards building a global morphological database to describe the form and composition of cities from a human-centric perspective.
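The final step described above, evaluating the fraction of each surface class on the sphere, can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it assumes the segmented panorama is stored as an equirectangular label map (rows spanning latitude from pole to pole), in which case each pixel's solid angle on the sphere is proportional to the cosine of its latitude, so class fractions must be area-weighted rather than counted per pixel.

```python
import numpy as np

# Illustrative class ids 0..5 for the paper's six classes (assumed mapping).
CLASSES = ["sky", "trees", "buildings", "impervious", "pervious", "non_permanent"]

def sphere_class_fractions(labels: np.ndarray) -> dict:
    """Solid-angle-weighted fraction of each class in an equirectangular
    label map of shape (height, width), rows spanning -90..+90 deg latitude."""
    h, w = labels.shape
    # Latitude at the center of each pixel row.
    lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2
    # Pixel solid angle scales with cos(latitude); broadcast across columns.
    weights = np.repeat(np.cos(lat)[:, None], w, axis=1)
    total = weights.sum()
    return {
        name: float(weights[labels == i].sum() / total)
        for i, name in enumerate(CLASSES)
    }

# Toy panorama: upper hemisphere all sky (0), lower all impervious surface (3).
demo = np.zeros((180, 360), dtype=int)
demo[90:, :] = 3
fracs = sphere_class_fractions(demo)
# By symmetry of cos(latitude), each hemisphere covers half the sphere,
# so fracs["sky"] and fracs["impervious"] are each 0.5.
```

Without the cosine weighting, pixels near the poles (the up and down view directions) would be overcounted relative to their true angular extent.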