Abstract

Convolutional Neural Networks (CNNs) are among the most commonly used architectures for image-related deep learning studies. Despite their popularity, CNNs have intrinsic limitations: pooling operations discard some spatial information and make the networks less robust to affine transformations. Capsule Networks, by contrast, are composed of groups of neurons and, with the help of novel routing algorithms, can also learn the high-dimensional pose configuration of objects. In this study, we investigate the performance of the recently introduced Capsule Networks with the dynamic routing algorithm on the clothing classification task. To this end, we propose a 4-layer stacked-convolutional Capsule Network architecture (FashionCapsNet) and train it on the DeepFashion dataset, which contains 290k clothing images across 46 categories. We then compare the category classification results of our proposed design against other state-of-the-art CNN-based methods trained on DeepFashion. In the experimental study, FashionCapsNet achieves 83.81% top-3 accuracy and 89.83% top-5 accuracy on clothing classification. Based on these figures, FashionCapsNet clearly outperforms earlier methods that neglect pose configuration and performs comparably to the baseline study that uses additional landmark information to recover pose. Finally, the proposed FashionCapsNet may gain further performance improvements on clothing classification as the relatively new field of Capsule Network research advances.
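The dynamic routing algorithm mentioned above refers to the routing-by-agreement procedure introduced with Capsule Networks. The following is a minimal NumPy sketch of that procedure for illustration only; the shapes, names, and iteration count are assumptions, not details taken from the FashionCapsNet architecture.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Non-linear "squash": short vectors shrink toward zero,
    # long vectors approach unit length, preserving direction.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Route prediction vectors u_hat[i, j, :] from each input capsule i
    to each output capsule j; returns output capsule vectors v[j, :]."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))  # routing logits, initialized to zero
    for _ in range(num_iters):
        # Coupling coefficients: softmax of logits over output capsules.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Weighted sum of predictions for each output capsule.
        s = np.einsum('ij,ijd->jd', c, u_hat)
        v = squash(s)
        # Increase logits where predictions agree with the output vector.
        b = b + np.einsum('ijd,jd->ij', u_hat, v)
    return v
```

Because of the squash non-linearity, the length of each output capsule vector stays below 1 and can be read as the probability that the corresponding entity (here, a clothing category) is present.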
