Abstract
Human mobility analytics using artificial intelligence (AI) has gained significant attention with advancements in computational power and the availability of high-resolution spatial data. However, the application of deep learning in the social sciences and human geography remains limited, primarily because of concerns about model explainability. In this study, we employ an explainable GeoAI approach called geographically localized interpretable model-agnostic explanation (GLIME) to explore human mobility patterns over large spatial and temporal extents. Specifically, we develop a two-layered long short-term memory (LSTM) model capable of predicting individual-level residential mobility patterns across the United States from 2012 to 2019. We leverage GLIME to provide geographical perspectives and interpret the deep neural network at the state level. The results show that GLIME enables spatially explicit interpretation of the local impacts attributed to different variables. Our findings underscore the importance of path dependency in residential mobility dynamics. While predicting complex human spatial decision-making processes remains challenging, this research demonstrates the utility of deep neural networks and explainable GeoAI for understanding human dynamics. It lays the groundwork for more finely tuned investigations, promising deeper insight into intricate mobility phenomena.
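The abstract does not detail GLIME's internals, but as a LIME variant its core mechanism is a locally weighted linear surrogate fitted to a black-box model around one instance, with coefficients read off as local feature impacts (which GLIME then aggregates by geographic unit, e.g. state). A minimal sketch of that local-surrogate step follows; the `black_box` function, sampling scale, and kernel width are illustrative assumptions, not the paper's actual model or settings:

```python
import numpy as np

# Hypothetical black-box predictor standing in for the trained LSTM;
# its output depends nonlinearly on two input features.
def black_box(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def lime_explain(f, x0, n_samples=5000, kernel_width=0.5, seed=0):
    """LIME-style local explanation: perturb around x0, weight samples
    by proximity, and fit a weighted linear surrogate to f."""
    rng = np.random.default_rng(seed)
    # Perturbations around the instance being explained (scale assumed).
    X = x0 + rng.normal(scale=0.3, size=(n_samples, x0.size))
    y = f(X)
    # Proximity kernel: perturbations closer to x0 get more weight.
    d2 = np.sum((X - x0) ** 2, axis=1)
    w = np.exp(-d2 / kernel_width ** 2)
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # local slope per feature = local impact

x0 = np.array([0.0, 1.0])
slopes = lime_explain(black_box, x0)
# Near x0 the true local gradients are cos(0) = 1 and x1 = 1,
# so both fitted slopes should be close to 1.
```

In a GLIME-style workflow, such per-instance slopes would be computed for many individuals and then summarized within each state to yield the spatially explicit variable impacts described above.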
International Journal of Geographical Information Science