Abstract

Over the last decade, AI has made rapid progress, and image classification in particular has achieved excellent performance based on deep learning. Nevertheless, because deep learning models behave as black boxes, their lack of explainability makes it difficult to apply them in critical decision-making domains such as national defense, autonomous driving, medical care, and finance. To overcome these limitations, this study applies a locally interpretable model explanation algorithm to Inception-derived networks to analyze the grounds on which they classify national defense data. Specifically, we conduct a comparative analysis of explainability based on confidence values by performing LIME analysis on the Inception v2_resnet model, and we verify the similarity between human interpretations and LIME explanations. Furthermore, by comparing the LIME explanations of the Top-1 outputs of the Inception v3, Inception v2_resnet, and Xception models, we confirm the feasibility of comparing the efficiency and usability of deep learning networks using XAI.
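As a minimal sketch of the kind of analysis described above, the snippet below applies LIME to the Top-1 prediction of an Inception-family classifier. It assumes the `lime` package and a TensorFlow/Keras InceptionResNetV2 backbone with ImageNet weights standing in for the paper's defense-data model; the image path `sample.jpg` is a hypothetical placeholder, not from the original study.

```python
# Hedged sketch: LIME explanation of the Top-1 class of an Inception-family model.
# Assumptions (not from the paper): ImageNet-pretrained InceptionResNetV2,
# a 299x299 RGB input image at the hypothetical path "sample.jpg".
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

model = tf.keras.applications.InceptionResNetV2(weights="imagenet")
preprocess = tf.keras.applications.inception_resnet_v2.preprocess_input

def predict_fn(images):
    # LIME passes a batch of perturbed copies of the image;
    # return the model's class probabilities for each copy.
    return model.predict(preprocess(np.array(images, dtype=np.float32)))

# Load and resize the image to the network's expected 299x299 input.
img = np.array(tf.keras.utils.load_img("sample.jpg", target_size=(299, 299)))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img, predict_fn, top_labels=1, hide_color=0, num_samples=1000)

# Highlight the superpixels that most support the Top-1 prediction.
top1 = explanation.top_labels[0]
image, mask = explanation.get_image_and_mask(
    top1, positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(image / 255.0, mask)
```

The same procedure can be repeated with InceptionV3 or Xception (swapping the model constructor and its matching `preprocess_input`) to compare which image regions each network relies on for its Top-1 decision.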
