Abstract
Siamese networks excel at comparing two images, serving as an effective class-verification technique when a single reference image per class is available. However, when multiple reference images are present, Siamese verification requires multiple comparisons and aggregation, which is often impractical at inference time. The Centre-Loss approach proposed in this research solves the class verification task more efficiently than sample-to-sample approaches, using a single forward pass during inference. Optimising a Centre-Loss function learns class centres and minimises intra-class distances in latent space. The authors compared verification accuracy using Centre-Loss against aggregated Siamese verification while keeping other hyperparameters (such as the neural-network backbone and distance type) the same. Experiments contrasted the ubiquitous Euclidean distance against other distance types to discover the optimal Centre-Loss layer, its size, and the Centre-Loss weight. In the optimal architecture, the Centre-Loss layer is connected to the penultimate layer, calculates Euclidean distance, and its size depends on the distance type. The Centre-Loss method was validated on the Self-Checkout products and Fruits 360 image datasets. Its comparable accuracy and lower complexity make Centre-Loss preferable to sample-to-sample approaches for the class verification task when the number of reference images per class is high and inference speed is a factor, such as in self-checkouts.
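The abstract itself contains no code; purely as an illustrative sketch (not the authors' implementation), the PyTorch snippet below shows how a centre-loss term with learnable class centres might look. The names CentreLoss and lambda_c, and the random centre initialisation, are assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn

class CentreLoss(nn.Module):
    """Illustrative centre-loss sketch: one learnable centre per class in
    the embedding space; the loss is the mean squared Euclidean distance
    between each embedding and its class centre (intra-class distance)."""

    def __init__(self, num_classes: int, embedding_dim: int):
        super().__init__()
        # Learnable class centres; random init is an assumption here.
        self.centres = nn.Parameter(torch.randn(num_classes, embedding_dim))

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Pick each sample's class centre and penalise the squared
        # Euclidean distance to it.
        batch_centres = self.centres[labels]
        return ((embeddings - batch_centres) ** 2).sum(dim=1).mean()

# Typical usage: add the centre-loss term to a classification loss,
# weighted by a hypothetical coefficient lambda_c (the "Centre-Loss
# weight" tuned in the paper):
#   total_loss = ce_loss + lambda_c * centre_loss(embeddings, labels)
```

At inference, verification then needs only a single forward pass: the query image's embedding is compared against the stored centre of the claimed class, rather than against every reference image.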