Abstract

Convolutional neural networks have achieved unprecedented breakthroughs in various computer vision tasks. However, because of their complex nonlinear model structure and the high dimensionality and complexity of the data distributions they operate on, they have been criticized as unexplainable "black boxes". Explaining neural network models and lifting this veil has therefore become a focus of research attention. Starting from the term "interpretability", this paper surveys results on the interpretability of convolutional neural networks from the past three years (2016–2018) and analyzes them by interpretation method. First, the concept of "interpretability" is introduced. Then, existing research achievements are classified and compared along four aspects: data characteristics and rule processing, analysis of the model's internal space, interpretation and prediction, and model interpretation. Finally, possible future research directions are pointed out.
