Abstract

Convolutional Neural Networks (CNNs) are a powerful class of Deep Learning models used in image processing, object classification and segmentation. They are very effective at extracting features from data and are widely used across many domains. Nonetheless, they require large amounts of training data, and relations between features are lost in the max-pooling step, which can lead to misclassification. Capsule Networks (CapsNets) were introduced to overcome these limitations by extracting features together with their pose, using capsules instead of neurons. This technique shows impressive performance on one-dimensional, two-dimensional and three-dimensional datasets, as well as on sparse datasets. In this paper, we present an initial understanding of CapsNets: their concept, structure and learning algorithm. We trace the progress made by CapsNets from their introduction in 2011 until 2020. We compare different CapsNet series to demonstrate their strengths and challenges. Finally, we review different implementations of Capsule Networks and show their robustness in a variety of domains. This survey provides the state of the art of Capsule Networks and allows other researchers to get a clear view of this new field. In addition, we discuss open issues and promising directions for future research, which may lead to a new generation of CapsNets.
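As a concrete illustration of the capsule idea mentioned above, the sketch below (our own illustration, not taken from the surveyed paper) implements the squash non-linearity used by dynamic-routing CapsNets (Sabour et al., 2017): a capsule outputs a vector whose length is read as the probability that an entity is present, while its direction encodes the entity's pose. The function name `squash` and the toy vectors are assumptions for illustration only.

```python
import numpy as np

def squash(s, eps=1e-8):
    """Squash non-linearity from dynamic-routing CapsNets.

    Rescales a capsule's raw output vector s so that its length lies in [0, 1):
    short vectors are shrunk toward zero, long vectors toward unit length,
    while the vector's direction (the pose information) is preserved.
    """
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    norm = np.sqrt(norm_sq + eps)
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

# A weak activation is squashed close to 0, a strong one toward 1.
print(np.linalg.norm(squash(np.array([0.1, 0.1]))))  # ~0.02
print(np.linalg.norm(squash(np.array([5.0, 5.0]))))  # ~0.98
```

Because the length stays below 1, it can be interpreted directly as an existence probability, which is what distinguishes a capsule's vector output from a single neuron's scalar activation.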

Highlights

  • Imitating the human brain used to be a dream for scientists until the creation of Artificial Neural Networks (ANNs)

  • Simulating human brain ability in object classification was the goal of Convolutional Neural Networks (CNNs)

  • Capsule Networks were introduced in 2011 as Transforming Auto-encoders by Hinton et al. (2011), who noticed that Convolutional Neural Networks are misguided in what they are trying to achieve


Summary

INTRODUCTION

Imitating the human brain used to be a dream for scientists until the creation of Artificial Neural Networks (ANNs). Simulating the human brain's ability in object classification was the goal of Convolutional Neural Networks (CNNs). This type of neural network shows high performance in object classification and image processing. However, CNNs are unable to detect object deformation and the relationships among object entities, and these limitations may lead to incorrect classification, negatively influencing model performance. Auto-encoders (AEs) are another family of neural networks, trained by updating weights through backpropagation with a gradient optimizer; this type of network is used for data denoising, dimensionality reduction and as a generative model. Capsule Networks were introduced in 2011, presented as Transforming Auto-encoders by Hinton et al. (2011), who noticed that Convolutional Neural Networks are misguided in what they are trying to achieve.
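To make the pooling limitation above concrete, here is a minimal NumPy sketch (our own illustration, not from the paper): two feature maps whose activations are arranged differently inside each pooling window collapse to the same max-pooled output, so the spatial relationship between the detected features is discarded. The helper `max_pool_2x2` and the toy maps `a` and `b` are hypothetical.

```python
import numpy as np

def max_pool_2x2(x):
    """Naive 2x2 max-pooling (stride 2) over a square feature map."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Same activation values, different positions inside each 2x2 window:
# pooling produces identical outputs, losing the features' arrangement.
a = np.array([[9, 0, 0, 7],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [5, 0, 0, 8]])
b = np.array([[0, 0, 7, 0],
              [0, 9, 0, 0],
              [0, 5, 0, 0],
              [0, 0, 0, 8]])
print(max_pool_2x2(a))                                    # [[9 7] [5 8]]
print(max_pool_2x2(b))                                    # [[9 7] [5 8]]
print(np.array_equal(max_pool_2x2(a), max_pool_2x2(b)))   # True
```

Capsules address this by passing on pose vectors rather than a single maximum activation, so the relative arrangement of parts can still inform the classification.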

CONVOLUTIONAL NEURAL NETWORKS
Overview of CNNs
CNNs Shortcomings
CAPSULE NETWORKS PROGRESS
Transforming Auto-encoders
Dynamic Routing Between Capsules
Matrix Capsules with EM Routing
Stacked Capsule Auto-encoders
IMPLEMENTATIONS
Advantages
Shortcomings
Findings
FUTURE WORK