Abstract

In order to provide urban residents with suitable living conditions, it is essential to keep track of the liveability of neighbourhoods. This is traditionally done through surveys and predictive modelling. However, surveying on a large scale is expensive and difficult to repeat. Recent research has shown that deep learning models trained on remote sensing images can be used to predict liveability. In this paper we study how well a model can predict liveability from aerial images by first predicting a set of intermediate domain scores. First, our results suggest that this semantic bottleneck model performs on par with a model trained only to predict liveability. Second, our model extrapolates well to unseen regions (R² between 0.45 and 0.75, Kendall's τ between 0.39 and 0.57), even to regions whose urban development context differs from the areas seen during training. Our results also suggest that domains that are directly visible in the aerial image patches (physical environment, buildings) generalize more easily than domains that can only be predicted through proxies (population, safety, amenities). We also test our model's perception of different neighbourhood typologies and conclude that it can predict the liveability of these typologies, though with varying accuracy. Overall, our results suggest that remote sensing can be used to extrapolate liveability surveys and their related domains to new and unseen regions within the same cultural and policy context.
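For illustration only, the sketch below shows one way such a semantic bottleneck could be wired up in Python: an image encoder produces intermediate domain scores, and the liveability score is regressed from those scores alone. The ResNet-18 backbone, the five named domains, the linear heads, and the name SemanticBottleneckNet are assumptions made for this example, not the architecture reported in the paper.

import torch
import torch.nn as nn
from scipy.stats import kendalltau
from sklearn.metrics import r2_score
from torchvision import models


class SemanticBottleneckNet(nn.Module):
    """Hypothetical semantic bottleneck: aerial image patch -> domain scores -> liveability."""

    def __init__(self, n_domains: int = 5):
        super().__init__()
        # Image encoder (ResNet-18 is an assumption, not necessarily the paper's backbone).
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d feature vector
        self.encoder = backbone
        # Bottleneck: one score per liveability domain
        # (e.g. physical environment, buildings, population, safety, amenities).
        self.domain_head = nn.Linear(512, n_domains)
        # Liveability is regressed from the domain scores only, so the
        # intermediate predictions fully mediate the final output.
        self.liveability_head = nn.Linear(n_domains, 1)

    def forward(self, x):
        features = self.encoder(x)                            # (B, 512)
        domain_scores = self.domain_head(features)            # (B, n_domains)
        liveability = self.liveability_head(domain_scores)    # (B, 1)
        return domain_scores, liveability


# Toy evaluation on dummy data, mirroring the reported metrics (R² and Kendall's τ).
model = SemanticBottleneckNet()
patches = torch.randn(8, 3, 224, 224)    # stand-in aerial image patches
targets = torch.randn(8)                 # stand-in survey-based liveability scores
with torch.no_grad():
    _, pred = model(patches)
pred = pred.squeeze(1).numpy()
tau, _ = kendalltau(targets.numpy(), pred)
print("R2:", r2_score(targets.numpy(), pred), "Kendall tau:", tau)

Because the liveability head sees only the domain scores, the intermediate predictions act as an interpretable bottleneck; this is the property the abstract compares against a model trained to predict liveability directly.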
