Abstract

Convolutional Neural Networks (CNNs) have driven substantial progress in the field of machine learning, but they come with their own set of drawbacks. Capsule Networks address several limitations of CNNs and have shown marked improvement by modeling the pose and transformations of objects in an image. Deeper networks are more powerful than shallow ones but are also more difficult to train; Residual Networks ease this training and have shown that good accuracy can be achieved at considerable depth. The Residual Capsule Network [15] combined Residual Networks and Capsule Networks. Although it performed well on a simple dataset such as MNIST, the architecture can be improved to do better on complex datasets like CIFAR-10. This motivates the 3-Level Residual Capsule Network, which not only uses fewer parameters than the seven-ensemble model but also outperforms the Residual Capsule Network on complex datasets.
