Abstract
Machine learning and artificial intelligence (ML/AI), previously considered black box approaches, are becoming more interpretable as a result of recent advances in eXplainable AI (XAI). In particular, local interpretation methods such as SHAP (SHapley Additive exPlanations) offer the opportunity to flexibly model, interpret and visualise complex geographical phenomena and processes. In this paper, we use SHAP to interpret XGBoost (eXtreme Gradient Boosting) as an example to demonstrate how to extract spatial effects from machine learning models. We conduct simulation experiments that compare SHAP-explained XGBoost to the Spatial Lag Model (SLM) and Multi-scale Geographically Weighted Regression (MGWR) at the parameter level. Results show that XGBoost estimates spatial effects similar to those in the SLM and MGWR models. An empirical example of Chicago ride-hailing modelling is presented to demonstrate the utility of SHAP with real datasets. Examples and evidence in this paper suggest that locally interpreted machine learning models are good alternatives to spatial statistical models and perform better when complex spatial and non-spatial effects (e.g. non-linearities, interactions) co-exist and are unknown.
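The sketch below illustrates the general idea described in the abstract, not the authors' actual code: fit an XGBoost model on features that include location coordinates, compute SHAP values with a tree explainer, and read the summed contributions of the coordinate features as an estimate of the spatial effect. All variable names, column names, and the synthetic data-generating process are hypothetical assumptions for illustration only.

```python
# Minimal sketch: extracting a spatial effect from an XGBoost model via SHAP.
# Assumes hypothetical columns "x", "y" (coordinates) and one non-spatial covariate.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "x": rng.uniform(0, 1, n),          # spatial coordinate (e.g. longitude proxy)
    "y": rng.uniform(0, 1, n),          # spatial coordinate (e.g. latitude proxy)
    "covariate": rng.normal(size=n),    # non-spatial predictor
})
# Synthetic response: a smoothly varying spatial surface plus a covariate effect.
df["target"] = (np.sin(3 * df["x"]) + np.cos(3 * df["y"])
                + 0.5 * df["covariate"]
                + rng.normal(scale=0.1, size=n))

X, y = df[["x", "y", "covariate"]], df["target"]
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X, y)

# TreeExplainer provides Shapley value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Summing the contributions of the coordinate features gives a per-observation
# spatial effect, loosely analogous to spatially varying terms in SLM/MGWR.
spatial_effect = shap_values[:, 0] + shap_values[:, 1]
print(spatial_effect[:5])
```

In practice the spatial effect estimated this way can be mapped at the observation locations and compared against the coefficient surfaces produced by MGWR or the spatial terms of an SLM, which is the kind of parameter-level comparison the abstract describes.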