Abstract
The Capsule Network (CapsNet) has attracted researchers because of its distinct ability to retain spatial correlations between image features. However, its applicability remains limited by its intensive computational cost, memory usage, and bandwidth requirements. This paper proposes a computationally efficient, lightweight CapsNet suitable for deployment on constrained edge devices as well as in web-based applications. The proposed framework consists of capsule layers and a deep feature representation layer that serves as the input to the capsules. The deep feature representation layer comprises a series of feature blocks, each containing a convolution with a 3 × 3 kernel followed by batch normalization and a convolution with a 1 × 1 kernel. The deeper, better-represented input features improve recognition performance even with fewer capsules, making the network computationally more efficient. The efficacy of the proposed framework is validated through rigorous experimental studies on several datasets, namely CIFAR-10, FMNIST, MNIST, and SVHN, which include images of object classes as well as text characters. A comparative analysis with the state-of-the-art CapsNet has also been performed. The comparison of recognition accuracy shows that the proposed architecture with deep input features provides more efficient routing between capsules than CapsNet. The proposed lightweight network also scales the number of parameters down to as little as 60% of CapsNet's, which is another significant contribution. This reduction is achieved through the combined effect of the deep feature generation module and parametric changes in the primary capsule layer.
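For concreteness, the sketch below illustrates the feature-block structure the abstract describes (a 3 × 3 convolution followed by batch normalization and a 1 × 1 convolution, stacked into a deep feature representation layer) in PyTorch. The layer widths, block count, activation choice, and `DeepFeatureExtractor` name are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the abstract's "feature block":
# 3x3 conv -> batch norm -> 1x1 conv. All hyperparameters
# below (widths, number of blocks, ReLU) are assumptions.
import torch
import torch.nn as nn


class FeatureBlock(nn.Module):
    """One deep-feature block: 3x3 conv -> batch norm -> 1x1 conv."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_channels)
        self.conv1 = nn.Conv2d(out_channels, out_channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.bn(self.conv3(x)))
        return self.act(self.conv1(x))


class DeepFeatureExtractor(nn.Module):
    """A series of feature blocks producing the input tensor for the
    primary capsule layer; the widths (64, 128, 256) are assumptions."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        widths = [64, 128, 256]
        blocks, prev = [], in_channels
        for w in widths:
            blocks.append(FeatureBlock(prev, w))
            prev = w
        self.blocks = nn.Sequential(*blocks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.blocks(x)


if __name__ == "__main__":
    # Example: a batch of CIFAR-10-sized images (3 x 32 x 32).
    x = torch.randn(8, 3, 32, 32)
    features = DeepFeatureExtractor()(x)
    print(features.shape)  # torch.Size([8, 256, 32, 32])
```

Under this reading, the richer feature map handed to the primary capsule layer is what allows the network to use fewer capsules, which is where the parameter savings relative to CapsNet would come from.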