Rapid progress in artificial intelligence, together with the widespread availability of computing power and big data, has led to remarkable achievements in applying deep learning to traffic engineering. However, although deep learning models scale well to big data, their complexity poses a significant challenge to interpretability, and academia has therefore shown keen interest in interpreting these intricate models. To address this challenge, this study introduces the Gated Recurrent Convolution Network (GRCN) for predicting spatiotemporal crash risk. Moreover, to turn the opaque GRCN into a transparent framework, SHapley Additive exPlanations (SHAP), a game-theory-based approach, is employed. SHAP analysis not only enhances the explainability of the GRCN model but also reveals complex relationships underlying crash risk, drawing on extensive datasets covering taxi trips, law enforcement records, and weather conditions. The study conducts both local and global interpretability analyses at the grid level. The analysis indicates that red-light violations at intersections tend to elevate crash risk. The findings further show that adverse weather conditions, such as low visibility and strong crosswinds, negatively affect driving behavior and correlate with higher crash risk, whereas cloudy skies are associated with lower crash risk. By shedding light on the intricate dynamics of crash risk factors, this study contributes to the field of traffic engineering and underscores the importance of interpretability in deep learning models. The insights gained from this research can inform the development of effective interventions and policies aimed at reducing crash risk and improving overall transportation safety.
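The GRCN + SHAP pipeline itself is not reproduced here, but the game-theoretic idea behind SHAP can be sketched with a from-scratch exact Shapley-value computation on a toy risk function. The feature names, coefficients, and baseline below are illustrative assumptions, not values from the study; real SHAP implementations approximate this exact sum for large models.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values: for each feature i, average the marginal
    contribution f(S ∪ {i}) - f(S) over all coalitions S of the other
    features, weighted by |S|! (n - |S| - 1)! / n!."""
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Features in the coalition (or i itself) take their observed
                # values; absent features are set to the baseline.
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += weight * (f(with_i) - f(without))
        phis.append(phi)
    return phis

# Hypothetical linear crash-risk score over three grid-cell features:
# red-light violations, visibility, crosswind speed (coefficients invented).
w = [0.8, -0.5, 0.3]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))
x = [1.0, 0.2, 0.6]       # observed feature values for one grid cell
base = [0.0, 1.0, 0.0]    # reference (baseline) feature values

phi = shapley_values(f, x, base)
# Local accuracy: the attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (f(x) - f(base))) < 1e-9
```

For a linear model, each exact Shapley value reduces to `w[i] * (x[i] - base[i])`, which makes the toy example easy to verify by hand; SHAP extends this attribution scheme to nonlinear models such as the GRCN.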