Abstract

Capsule networks overcome two drawbacks of convolutional neural networks: weak recognition of rotated objects and poor spatial discrimination. However, they still encounter problems with complex images, including high computational cost and limited accuracy. To address these challenges, this work develops effective solutions. Specifically, a novel windowed dynamic up-and-down attention routing process is first introduced, which reduces the computational complexity of routing from quadratic to linear order. A deconvolution-based decoder is also used to further reduce computational cost. Then, a LayerNorm strategy is used to pre-process neuron values in the squash function, which prevents saturation and mitigates the vanishing-gradient problem. In addition, a gradient-friendly network structure is developed to facilitate the extraction of complex features with deeper networks. Experiments show that our methods are effective and competitive, outperforming existing techniques.
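To illustrate the LayerNorm-before-squash idea mentioned above, the following PyTorch sketch normalizes capsule pre-activations with LayerNorm before applying the standard squash nonlinearity, so that vector norms stay in a moderate range and the squash does not saturate. The module name `LayerNormSquash`, the `capsule_dim` argument, and the exact placement of `nn.LayerNorm` are illustrative assumptions; the paper's precise formulation may differ.

```python
import torch
import torch.nn as nn


class LayerNormSquash(nn.Module):
    """Apply LayerNorm to capsule pre-activations, then the standard squash.

    Normalizing each pose vector before squashing keeps its norm in a
    moderate range, so the squash stays out of its saturated region and
    gradients are less likely to vanish (a sketch of the strategy described
    in the abstract, not the authors' exact implementation).
    """

    def __init__(self, capsule_dim: int, eps: float = 1e-8):
        super().__init__()
        self.norm = nn.LayerNorm(capsule_dim)  # pre-processing step
        self.eps = eps

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        # s: (..., capsule_dim) -- raw capsule inputs
        s = self.norm(s)                                   # LayerNorm pre-processing
        sq_norm = (s ** 2).sum(dim=-1, keepdim=True)       # ||s||^2
        scale = sq_norm / (1.0 + sq_norm)                  # squash magnitude in [0, 1)
        return scale * s / torch.sqrt(sq_norm + self.eps)  # scaled unit direction
```

Because LayerNorm bounds the typical magnitude of `s`, the `scale` factor rarely collapses to values near 1 with negligible gradient, which is the saturation issue the abstract refers to.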
