Abstract
During the design stage, quick and accurate predictions are required for effective design decisions. Model developers prefer simple, interpretable models for their high computation speed. Given that deep learning (DL) models offer both high computational speed and accuracy, it would be beneficial if these models were also explainable. Furthermore, current DL development tools simplify the model development process. This article proposes a method to make the learning of a DL model explainable, enabling non–machine learning (ML) experts to draw inferences about model generalization and reusability. The proposed method utilizes dimensionality reduction (t-Distributed Stochastic Neighbour Embedding) and mutual information (MI). Results indicate that the convolutional layers capture design-related interpretations, while the fully connected layer captures performance-related interpretations. Furthermore, the global geometric structure is similar between models that generalize well and those that generalize poorly; the key difference indicating poor generalization is the smoothness of the low-dimensional embedding. MI enables quantification of the reasons for good and poor generalization. Such interpretation gives a non-ML expert additional information on model behaviour.
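The abstract does not give implementation details, but the described pipeline (project layer activations with t-SNE, then score the embedding against a quantity of interest with MI) can be illustrated with a minimal sketch. The function names, the Keras-style activation arrays, and the choice of `mutual_info_regression` as the MI estimator are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: t-SNE embedding of a layer's activations plus an MI
# score against a performance target. Names and data shapes are assumed.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.feature_selection import mutual_info_regression


def embed_layer(activations: np.ndarray, random_state: int = 0) -> np.ndarray:
    """Project flattened per-sample layer activations to a 2-D t-SNE embedding."""
    flat = activations.reshape(len(activations), -1)
    return TSNE(n_components=2, random_state=random_state).fit_transform(flat)


def layer_mi(embedding: np.ndarray, target: np.ndarray) -> float:
    """Average mutual information between the embedding dimensions and a target
    quantity (e.g. a predicted performance value)."""
    return float(mutual_info_regression(embedding, target).mean())


# Assumed usage: `conv_acts` holds activations of a convolutional layer for a
# set of design samples, `performance` holds the corresponding performance values.
# emb = embed_layer(conv_acts)
# score = layer_mi(emb, performance)
```

Under these assumptions, a low MI score or a visibly non-smooth embedding for a layer would be the kind of signal the article associates with poor generalization.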