Abstract

Most analytics models rely on complex internal learning processes and calculations that can be unintuitive, opaque, and incomprehensible to humans. Analytics-based decisions must be transparent and intuitive to foster human acceptance of and confidence in analytics. Typical AI models are non-transparent, or opaque: even their designers cannot explain how a specific decision is reached. Explainable analytics models, by contrast, are transparent models in which the primary factors and weights that lead to a prediction can be explained, helping decision-makers understand the resulting judgments and build trust in analytics. This study introduces a comprehensive model that fuses descriptive, predictive, and prescriptive analytics, offering a fresh perspective on car accident severity. Our methodological contribution lies in applying advanced techniques to address data-related challenges, optimize feature selection, develop predictive models, and fine-tune parameters. Importantly, we also incorporate model-agnostic interpretation techniques, which separate explanations from the underlying models and further enhance the transparency and interpretability of our approach. Our findings should provide novel insights for domain experts seeking to understand accident severity. The explainable analytics approach suggested in this study supplements non-transparent machine learning prediction models, particularly optimized ensemble models, and its end product is a comprehensible representation of crash severity factors. To obtain a more trustworthy assessment of accident severity, the model could be supplemented with insurance data, medical data such as blood work and pulse rate, and previous medical history.
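To illustrate what "model-agnostic interpretation" of an opaque ensemble model can look like in practice, the sketch below applies permutation importance (one common model-agnostic technique, not necessarily the one used in the study) to a gradient-boosting classifier. The data and feature names are purely illustrative stand-ins, not the study's crash dataset.

```python
# Hypothetical sketch: model-agnostic interpretation of an opaque
# ensemble model via permutation importance (data and names invented).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data: 5 synthetic features as a proxy for crash records.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["speed_limit", "weather", "light_condition",
                 "road_type", "vehicle_age"]  # illustrative labels only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance treats the fitted model as a black box:
# shuffle one feature at a time and measure the drop in held-out score.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

Because the importance scores come only from perturbing inputs and re-scoring, the same procedure works unchanged for any predictive model, which is precisely the "separate explanations from models" property the abstract describes.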
