Abstract

The past few years have witnessed exponential growth of interest in deep learning methodologies, with rapidly improving accuracy and reduced computational complexity. In particular, architectures using Convolutional Neural Networks (CNNs) have produced state-of-the-art performance on image classification and object recognition tasks. Recently, Capsule Networks (CapsNets) achieved a significant increase in performance by addressing an inherent limitation of CNNs in encoding pose and deformation. Inspired by this advancement, we propose Multi-level Dense Capsule Networks (multi-level DCNets). The proposed framework customizes CapsNet by adding multi-level capsules and replacing the standard convolutional layers with densely connected convolutions. A single-level DCNet adds a deeper convolutional network in which discriminative feature maps learned by different layers are combined to form the primary capsules. Additionally, a multi-level capsule network uses a hierarchical architecture to learn new capsules from former capsules, representing spatial information in a fine-to-coarse manner, which makes it more efficient at learning complex data. Experiments on image classification tasks using benchmark datasets demonstrate the efficacy of the proposed architectures. DCNet achieves state-of-the-art performance (99.75%) on the MNIST dataset with an approximately twenty-fold decrease in total training iterations compared to the conventional CapsNet. Furthermore, multi-level DCNet performs better than CapsNet on the SVHN dataset (96.90%) and outperforms the ensemble of seven CapsNet models on CIFAR-10 by \(+\)0.31% with a seven-fold decrease in the number of parameters. Source code, models, and figures are available at https://github.com/ssrp/Multi-level-DCNet.
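For readers unfamiliar with capsule vectors: in the original CapsNet formulation (which DCNets build on), each primary capsule is a small pose vector whose length is mapped into [0, 1) by the "squash" nonlinearity so that it can act as an existence probability. The sketch below is a minimal NumPy illustration of that nonlinearity only; it is not the authors' implementation (which is available at the repository linked above), and the capsule shape used is an arbitrary example.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash nonlinearity from the CapsNet paper:
    # v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||)
    # Shrinks short vectors toward zero and caps long ones below unit length,
    # preserving direction (the encoded pose).
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / np.sqrt(sq_norm + eps)

# Example: a batch of 10 primary capsules, each an 8-D pose vector
# (8-D is the primary-capsule size used in the original CapsNet; the
# batch size here is arbitrary).
caps = np.random.randn(10, 8)
out = squash(caps)
norms = np.linalg.norm(out, axis=-1)  # all lengths fall in [0, 1)
```

The key design point is that routing between capsule levels compares vector directions, so the nonlinearity must rescale length without rotating the vector.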
