Abstract

Artificial intelligence (AI) techniques have been widely implemented in the domain of autonomous vehicles (AVs). However, existing AI techniques, such as deep learning and ensemble learning, have been criticized for their black-box nature. Explainable AI is an effective methodology for understanding the black box and building public trust in AVs. In this paper, a maximum entropy-based Shapley Additive exPlanation (SHAP) is proposed for explaining lane change (LC) decisions. Specifically, we first build an LC decision model with high accuracy using eXtreme Gradient Boosting. Then, to explain the model, a modified SHAP method is proposed by introducing a maximum entropy base value. The core of this method is to determine the base value of the LC decision model using the maximum entropy principle, which yields an explanation more consistent with human intuition. This follows from two properties: 1) maximum entropy has a clear physical meaning, quantifying a decision's progression from chaos to certainty, and 2) the sum of the explanations is always isotropic and positive. Furthermore, we develop comprehensive statistical analyses and visualizations to present intuitive explanations of the LC decision model. Based on the explanation results, we attribute incorrect predictions to model defects or sample sparsity, which provides guidance to users for model optimization.
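The abstract's core idea can be illustrated numerically. The sketch below is not the authors' implementation: all class counts, feature weights, and prediction values are hypothetical. It shows the two ingredients the abstract names: a base value taken as the maximum-entropy (i.e., uniform) distribution over the decision classes, and SHAP's local-accuracy property that the base value plus the per-feature attributions must reproduce the model output, so the attributions measure the movement from maximal uncertainty to the final decision.

```python
import numpy as np

def max_entropy_base_value(n_classes: int) -> np.ndarray:
    # With no prior constraints, the maximum-entropy distribution over
    # K discrete decisions (e.g., keep lane / change left / change right)
    # is uniform: each class receives probability 1/K.
    return np.full(n_classes, 1.0 / n_classes)

def entropy(p: np.ndarray) -> float:
    # Shannon entropy in nats; zero-probability classes contribute nothing.
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def additivity_holds(prediction: np.ndarray, shap_values: np.ndarray) -> bool:
    # SHAP local accuracy: base value + sum of per-feature attributions
    # must equal the model's predicted class probabilities.
    base = max_entropy_base_value(len(prediction))
    return bool(np.allclose(base + shap_values.sum(axis=0), prediction))

# Toy scenario: 3 LC classes, 4 features (values are illustrative only).
pred = np.array([0.70, 0.20, 0.10])        # hypothetical model output
base = max_entropy_base_value(3)           # uniform [1/3, 1/3, 1/3]

# Hypothetical attributions: split the gap (pred - base) across features
# so local accuracy holds by construction.
feature_weights = np.array([0.4, 0.3, 0.2, 0.1])
phi = np.outer(feature_weights, pred - base)  # shape (4 features, 3 classes)
```

Because the base value sits at maximum entropy (log 3 nats for three classes), the attributions can be read as the features' contributions to reducing the decision's entropy from total uncertainty toward the confident prediction `pred`.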
