Abstract

Convolutional neural networks (CNNs) have brought exciting progress to many computer vision tasks. However, the feature extraction process performed by a CNN remains a black box, and its working mechanism is not yet fully understood. In this paper, we propose a method to evaluate CNN features and further analyze the CNN feature extractor, inspired by Bayes classification theory and the Kullback-Leibler divergence (KLD). Experiments show that, during training, a CNN promotes feature discriminativeness by gradually increasing the inter-class KLD, and at the same time promotes feature robustness by gradually decreasing the intra-class KLD. Experiments also reveal that, as the network deepens, the CNN gradually increases the density of separability information in the feature space and encodes much more separability information into the final feature vectors.
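The two quantities the abstract refers to can be made concrete with a minimal sketch. Here class-conditional feature distributions are modeled as diagonal Gaussians, inter-class KLD is averaged over class pairs, and intra-class KLD is estimated from two random halves of each class; the function names and the half-split estimate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_kld(mu_p, var_p, mu_q, var_q):
    # Closed-form KL divergence KL(p || q) between two diagonal Gaussians.
    return 0.5 * np.sum(np.log(var_q / var_p)
                        + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def inter_intra_kld(features, labels, eps=1e-6, seed=0):
    """Fit a diagonal Gaussian per class; return (mean inter-class KLD,
    mean intra-class KLD estimated from random halves of each class)."""
    classes = np.unique(labels)
    stats = {}
    for c in classes:
        x = features[labels == c]
        stats[c] = (x.mean(axis=0), x.var(axis=0) + eps)  # eps avoids /0

    # Inter-class KLD: average over all ordered pairs of distinct classes.
    inter = [gaussian_kld(*stats[a], *stats[b])
             for a in classes for b in classes if a != b]

    # Intra-class KLD: divergence between Gaussians fit to two random
    # halves of the same class (a simple within-class spread estimate).
    rng = np.random.default_rng(seed)
    intra = []
    for c in classes:
        x = features[labels == c]
        idx = rng.permutation(len(x))
        h1, h2 = x[idx[:len(x) // 2]], x[idx[len(x) // 2:]]
        intra.append(gaussian_kld(h1.mean(axis=0), h1.var(axis=0) + eps,
                                  h2.mean(axis=0), h2.var(axis=0) + eps))
    return float(np.mean(inter)), float(np.mean(intra))
```

Under this reading, a well-trained extractor should drive the first value up (separated classes) while keeping the second value small (compact classes).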
