Abstract

We propose a new form of plausible counterfactual explanation designed to explain the behaviour of computer vision systems used in urban analytics that make predictions based on properties across the entire image, rather than specific regions of it. We illustrate the merits of our approach by explaining computer vision models used to analyse street imagery, which are now widely used in GeoAI and urban analytics. Such explanations are important in urban analytics because researchers and practitioners increasingly rely on these models for decision making. Finally, we report a user study demonstrating that our approach enables non-expert users, who may not be familiar with machine learning, to be more confident in and to better understand the behaviour of image-based classifiers and regressors for street view analysis. Furthermore, the method can potentially serve as an engagement tool for visualising what public spaces could plausibly look like. The limited realism of the counterfactuals remains a concern that we hope to address in future work.
