Abstract

Convolutional neural networks (CNNs) are successful in many different applications; however, their decisions can be easily flipped by slight modifications to the inputs. Robustness must be guaranteed in safety-critical fields such as medicine, so it is necessary to understand the decision-making procedure of CNN models. Because a CNN automatically extracts image features and makes predictions from them, observing the learned feature space can approximately reveal the decision boundary. In this paper, linear interpolation in the learned feature space is used to analyze how well a CNN separates different classes. By adding a conformity loss that forces the CNN to separate the extracted features at different layer depths, the classification distribution becomes more separable and stable, enhancing the robustness of the model. The linear interpolation results showed that the model had better classification ability, with fewer perturbed classes appearing along interpolation paths.
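The core diagnostic described above, linear interpolation between feature vectors, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors and the linear classifier head are toy values chosen for demonstration, and the real analysis would use features extracted from a trained CNN.

```python
import numpy as np

def interpolate_features(f_a, f_b, steps=11):
    """Linearly interpolate between two feature vectors f_a and f_b."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1.0 - a) * f_a + a * f_b for a in alphas]

# Toy linear classifier head (illustrative weights, not from the paper):
# 2 classes over 2-d features.
W = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

f_a = np.array([1.0, 0.0])  # feature vector of a class-0 example
f_b = np.array([0.0, 1.0])  # feature vector of a class-1 example

path = interpolate_features(f_a, f_b)
preds = [int(np.argmax(W @ f)) for f in path]

# A well-separated feature space yields a single clean class transition
# along the path; many flips would indicate perturbed, unstable regions.
print(preds)  # → [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

Counting how many distinct classes appear along such a path (here only the two endpoint classes) is one way to quantify the "fewer perturbed classes" observation reported in the abstract.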
