Abstract

Despite the great success of deep neural networks in many fields, their lack of interpretability has severely limited their application in security-sensitive tasks. Although current interpretability methods for deep neural networks, such as visualization, class activation mapping, and sensitivity analysis, can help users intuitively understand the inner workings of these networks to some extent, their explanations are either too coarse or too complex in form to read easily. In order to interpret deep neural networks with semantic information that is more understandable and closer to human thought, and to increase the readability of the interpretation, we propose a semantic interpretation method for deep neural networks based on a knowledge graph. Taking the VGG16 network as an example, the method mines the key neurons of the network, constructs semantic dictionaries and knowledge graphs of these key neurons, and automatically generates human-understandable semantic explanatory statements from the knowledge graphs. The method provides a new way to improve the transparency of the operation of deep neural networks, and also provides a clearer reference basis for pruning and tuning them.
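To make the pipeline described above concrete, the following is a minimal sketch of one plausible reading of it: capture activations from a VGG16 layer, treat the most strongly activated channels as "key neurons," and map them to concepts through a semantic dictionary. This is not the authors' implementation; the `semantic_dict` mapping and the top-k selection criterion are hypothetical placeholders, since the paper builds these from mined neuron-concept associations.

```python
# A minimal sketch of the abstract's pipeline, not the paper's actual method.
# Assumptions: key neurons are approximated as the top-k channels by mean
# activation, and semantic_dict is a hypothetical stand-in for the mined
# semantic dictionary.
import torch
import torchvision.models as models

# Load a pretrained VGG16 and hook its final convolutional layer.
model = models.vgg16(weights="DEFAULT").eval()
activations = {}

def hook(module, inputs, output):
    activations["last_conv"] = output.detach()

model.features[28].register_forward_hook(hook)  # last Conv2d in VGG16

# A random tensor stands in for a preprocessed 224x224 input image.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    model(x)

# "Key neuron" mining: score each channel by its mean activation.
scores = activations["last_conv"].mean(dim=(0, 2, 3))
key_neurons = torch.topk(scores, k=3).indices.tolist()

# Hypothetical semantic dictionary mapping channel index -> concept label.
semantic_dict = {i: f"concept_{i}" for i in range(512)}

concepts = [semantic_dict[i] for i in key_neurons]
print(f"Key neurons {key_neurons} correspond to: {', '.join(concepts)}. "
      "An explanatory statement would be generated from these concepts "
      "via the knowledge graph.")
```

In the paper's full method, the final step would traverse the knowledge graph of neuron-concept relations to compose a natural-language explanation, rather than the simple string formatting shown here.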
