Abstract

We investigate the classification performance of neural networks (NNs) from a topological standpoint, in an attempt to guarantee the stability of their inference. An NN that classifies a dataset accurately maps it into a hidden space in which intertwined data become disentangled. However, NNs sometimes acquire a forcible mapping to achieve this disentanglement, and such a forcible mapping generates outliers. The mapping around these outliers is unstable because the outputs change drastically there. We therefore define a stable NN as one that does not generate outliers. To test for the possible existence of outliers, we combine persistent homology with a method for estimating confidence sets for persistence diagrams; together, these allow us to test whether the geometry in question is topologically simple, that is, free of outliers. Using the MNIST and CIFAR-10 datasets, we investigate the relationship between classification performance and topological characteristics across several NNs. On MNIST, every network achieves high test accuracy, exceeding 98%, even though the transformed dataset is not topologically simple. On CIFAR-10, the mappings learned by the accurate convolutional NNs likewise indicate the possible existence of outliers. We therefore conclude that the presented investigation is necessary to guarantee that NNs, in particular deep NNs, do not acquire unstable mappings through forcible classification.
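
The test described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation) using the ripser and persim Python libraries: it computes the H1 persistence diagram of a point cloud standing in for hidden-layer activations, estimates a bootstrap-style confidence band for the diagram, and flags features that rise above the band. A topologically simple representation should have no such features. The random point cloud, variable names, and bootstrap scheme are assumptions made for illustration.

```python
import numpy as np
from ripser import ripser        # Vietoris-Rips persistent homology
from persim import bottleneck    # bottleneck distance between diagrams

rng = np.random.default_rng(0)
# Placeholder for real data: in the paper's setting this would be the
# hidden-layer activations of one class, extracted from a trained NN.
hidden = rng.normal(size=(300, 8))

# H1 persistence diagram (loops) of the full point cloud.
dgm = ripser(hidden, maxdim=1)['dgms'][1]

# Bootstrap: the bottleneck distance between the diagram of a resample
# and the full diagram approximates the sampling error of the diagram.
n_boot, dists = 100, []
for _ in range(n_boot):
    idx = rng.choice(len(hidden), size=len(hidden), replace=True)
    dgm_b = ripser(hidden[idx], maxdim=1)['dgms'][1]
    dists.append(bottleneck(dgm, dgm_b))

# 95% confidence band: diagram points within bottleneck distance c of
# the diagonal (persistence <= 2c) are indistinguishable from noise;
# points above the band indicate non-trivial topology, i.e. structure
# that a topologically simple representation should not exhibit.
c = np.quantile(dists, 0.95)
significant = dgm[(dgm[:, 1] - dgm[:, 0]) > 2 * c]
print(f"confidence radius c = {c:.3f}, "
      f"{len(significant)} significant H1 feature(s)")
```

In use, one would replace the synthetic point cloud with activations taken from the layer of interest and repeat the test per class; an empty set of significant features is consistent with a simple, outlier-free mapping, while surviving features warrant closer inspection.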
