Abstract

Fisher kernels derived from stochastic probabilistic models such as restricted and deep Boltzmann machines have shown visual classification results competitive with widely popular deep discriminative models. This genre of Fisher kernels bridges the gap between the shallow and deep learning paradigms by inducing the characteristics of deep architectures into the Fisher kernel, which is then deployed for classification with discriminative classifiers. Despite their success, the memory and computational costs of Fisher vectors make them ill-suited to large-scale visual retrieval and classification tasks. This study introduces a novel feature selection technique, inspired by the functional characteristics of neural architectures for learning discriminative feature representations, to boost the performance of Fisher kernels against deep discriminative models. The proposed technique condenses the high-dimensional Fisher features for kernel learning and improves both classification performance and storage cost on leading benchmark data sets. The proposed method is compared with other state-of-the-art feature selection techniques to demonstrate its superior performance as well as the time required to learn in the reduced Fisher space.
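The pipeline the abstract describes, selecting a compact subset of a high-dimensional Fisher vector before kernel learning, can be sketched roughly as follows. This is a minimal illustration using a generic variance-based selection criterion and made-up toy dimensions; it is not the paper's neural-inspired technique.

```python
import numpy as np

# Toy stand-in for Fisher vectors, which are typically very high-dimensional
# (on the order of 2 * n_gmm_components * descriptor_dim). Sizes are illustrative.
rng = np.random.default_rng(0)
n_samples, fv_dim = 100, 4096
X = rng.standard_normal((n_samples, fv_dim))

# Generic feature selection (NOT the paper's method): keep the k features
# with the highest variance across the training set.
k = 256
scores = X.var(axis=0)
selected = np.argsort(scores)[-k:]   # indices of the k highest-variance features
X_reduced = X[:, selected]           # condensed Fisher representation

# Linear kernel in the reduced space, as consumed by a discriminative
# classifier such as an SVM; storage drops from fv_dim to k per sample.
K = X_reduced @ X_reduced.T
print(X_reduced.shape, K.shape)      # (100, 256) (100, 100)
```

Any per-feature score (mutual information, weight magnitudes, etc.) could replace the variance criterion here; the surrounding selection-and-kernel machinery stays the same.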
