Abstract

Many deep learning models that show improved efficacy over current state-of-the-art models are built using ad-hoc design strategies. In this study, a framework was developed to enhance the explainability of deep learning models. The framework systematically explains each step involved in enhancing existing models so that users can understand, replicate and trust them. A design science research methodology was used to develop the framework and to identify ambiguities and knowledge gaps in current approaches, and experiments were conducted to enhance current deep learning models. The results of this study show that the suggested framework enables the enhancement of state-of-the-art deep learning prediction models, and that the steps required to achieve this are easy to comprehend. The main contribution of this study is the design of an explainable deep learning framework with a repeatable and understandable strategy that researchers can follow to improve state-of-the-art prediction models.

Keywords: Artificial intelligence ethics; time series prediction; irregular sequential patterns; machine learning models; deep learning framework.
