Abstract

When we recognize images with the help of Artificial Neural Networks (ANNs), we often wonder how they make decisions. A widely accepted approach is to point out local features as decisive evidence. A question then arises: can local features in the latent space of an ANN explain the model output to some extent? In this work, we propose a modularized framework named MemeNet that can construct a reliable surrogate from a Convolutional Neural Network (CNN) without changing its perception. Inspired by the idea of time series classification, this framework recognizes images in two steps. First, local representations named memes are extracted from the activation maps of a CNN model. Then an image is transformed into a series of understandable features. Experimental results show that MemeNet can achieve accuracy comparable to that of most baseline models using a set of reliable features and a simple classifier. Thus, it is a promising interface to the internal dynamics of a CNN and represents a novel approach to constructing reliable models.
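The two-step pipeline described above can be illustrated with a minimal sketch. The abstract does not specify how memes are selected or matched, so the details here are assumptions: memes are taken to be small windows sampled from CNN activation maps, and each feature is taken to be the minimum distance between a meme and any same-sized window of an image's activation map (analogous to shapelet features in time series classification). The function names `extract_memes` and `meme_features` are hypothetical, not from the paper.

```python
import numpy as np

def extract_memes(activation_maps, k=4, size=3, seed=0):
    """Sample k local windows ("memes") from CNN activation maps.

    activation_maps: array of shape (n_images, H, W), a single channel
    for simplicity. Returns an array of shape (k, size, size).
    (Illustrative stand-in: the paper's actual selection criterion
    is not described in the abstract.)
    """
    rng = np.random.default_rng(seed)
    n, H, W = activation_maps.shape
    memes = []
    for _ in range(k):
        i = rng.integers(n)
        r = rng.integers(H - size + 1)
        c = rng.integers(W - size + 1)
        memes.append(activation_maps[i, r:r + size, c:c + size])
    return np.stack(memes)

def meme_features(activation_map, memes):
    """Transform one activation map into k scalar features: for each
    meme, the minimum Euclidean distance to any same-sized window."""
    size = memes.shape[1]
    H, W = activation_map.shape
    feats = []
    for m in memes:
        best = np.inf
        for r in range(H - size + 1):
            for c in range(W - size + 1):
                d = np.linalg.norm(activation_map[r:r + size, c:c + size] - m)
                best = min(best, d)
        feats.append(best)
    return np.array(feats)
```

The resulting fixed-length feature vector could then be fed to any simple classifier (e.g. logistic regression), matching the abstract's claim that understandable local features plus a simple classifier suffice.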
