Abstract

Deep neural networks (DNNs), such as convolutional neural networks (CNNs), have been widely used for object recognition. However, they are usually unable to ensure the required intra-class compactness and inter-class separability in kernel space, both of which are known to be important in pattern recognition for achieving robustness and accuracy. In this paper, we propose to integrate a kernelized Min-Max objective into DNN training in order to explicitly enforce both within-class compactness and a between-class margin in kernel space. The kernel space is implicitly mapped from the feature space of an upper DNN layer via the kernel trick, and the Min-Max objective in this space is interpolated with the original DNN loss function and optimized during training. With very little additional computational cost, the proposed strategy can be easily integrated into different DNN models without changing any other part of the original model. The recognition accuracy of the proposed method is evaluated with multiple DNN models (including shallow CNN, deep CNN, and deep residual network models) on two benchmark datasets: CIFAR-10 and CIFAR-100. Extensive experimental results demonstrate that integrating the kernelized Min-Max objective into DNN training achieves better results than state-of-the-art models, without incurring additional model complexity.
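
To make the idea of interpolating a kernel-space objective with the standard classification loss concrete, the following PyTorch-style sketch adds a kernelized compactness/separability penalty, computed on the features of an upper layer, to cross-entropy. The RBF kernel, the specific form of the penalty, and the trade-off weight lam are illustrative assumptions for this sketch, not the paper's exact formulation.

    # Illustrative sketch (assumptions noted above), not the paper's exact objective.
    import torch
    import torch.nn.functional as F

    def rbf_kernel(feats, gamma=1.0):
        # Pairwise RBF kernel matrix K[i, j] = exp(-gamma * ||f_i - f_j||^2),
        # implicitly mapping the layer's features into a kernel space.
        sq_dists = torch.cdist(feats, feats).pow(2)
        return torch.exp(-gamma * sq_dists)

    def kernel_min_max_penalty(feats, labels, gamma=1.0):
        # Encourage high kernel similarity within a class (compactness) and
        # low kernel similarity across classes (separability / margin).
        K = rbf_kernel(feats, gamma)
        same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
        diff = 1.0 - same
        within = (K * same).sum() / same.sum().clamp(min=1.0)
        between = (K * diff).sum() / diff.sum().clamp(min=1.0)
        return between - within  # smaller is better

    def total_loss(logits, feats, labels, lam=0.01, gamma=1.0):
        # Interpolate the original classification loss with the kernel-space penalty.
        ce = F.cross_entropy(logits, labels)
        return ce + lam * kernel_min_max_penalty(feats, labels, gamma)

Here feats would be the activations of the chosen upper layer for the current mini-batch, so the penalty adds only a kernel-matrix computation per batch on top of the unchanged network.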
