Abstract

Feature normalization has been a crucial step in convolutional neural networks (CNNs) in recent years, and discriminative feature abstraction is indispensable for boosting the overall performance of learning models. For 3D data, in both point cloud and mesh models, the inner product (cosine similarity) is frequently applied for similarity estimation without normalization. In this paper, we first demonstrate that the softmax loss has a positive lower bound, and therefore causes a convergence problem, if feature comparison is performed directly by cosine similarity. Second, we propose a revised formulation of the softmax loss that, during training, replaces the inner product of the two vectors with a scaled cosine similarity computed on normalized features, allowing the network to converge correctly. Experiments show significant improvements over previous methods on several datasets: on the SHREC dataset, our method achieves 99.16% accuracy on split16 and 97.66% on split10, and on the Cube dataset it achieves 96.96% accuracy, surpassing previous methods on the mesh classification task.
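The core idea of the revised loss can be sketched as follows. This is a minimal illustration, not the authors' implementation: features and class weights are L2-normalized so the logits are cosine similarities in [-1, 1], and a scale hyperparameter (here arbitrarily set to 16) widens their range so the softmax can approach a one-hot target and the cross-entropy can keep decreasing.

```python
import numpy as np

def cosine_softmax_loss(features, weights, labels, scale=16.0):
    """Cross-entropy over scaled cosine-similarity logits (illustrative sketch).

    features: (N, D) raw feature vectors
    weights:  (C, D) class weight vectors
    labels:   (N,) integer class labels
    scale:    multiplier applied to the cosine logits (hyperparameter)
    """
    # L2-normalize both features and class weights
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    logits = scale * (f @ w.T)  # scaled cosine similarities, shape (N, C)
    # numerically stable log-softmax
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Without scaling, the cosine logit of the true class is capped at 1 and those of the other C - 1 classes are bounded below by -1, so the per-sample loss can never drop below log(1 + (C - 1)e^{-2}); multiplying by a scale s replaces this floor with log(1 + (C - 1)e^{-2s}), which vanishes as s grows, which is the convergence issue the paper addresses.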
