Abstract

An improved loss function that requires no sampling procedure is proposed to address the poor classification performance caused by sample shortage. Adjustable parameters, added to the cross-entropy loss and the SoftMax loss, are used to widen the range of the loss, reduce the weight of easily classified samples, and thereby replace the sampling step. Experimental results show that the proposed loss function improves classification performance across various network architectures and datasets. In summary, compared with traditional loss functions, the improved version not only raises classification performance but also lowers the difficulty of network training.
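
As a rough illustration of this idea, the sketch below adds a modulating factor to the standard cross-entropy/SoftMax loss so that easily classified samples contribute less to the total loss, with no sampling step. The names `alpha` and `gamma` are hypothetical stand-ins for the adjustable parameters mentioned above; the paper's exact formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def down_weighted_cross_entropy(logits, targets, alpha=1.0, gamma=2.0):
    """Cross-entropy/SoftMax loss with a modulating factor that shrinks the
    contribution of easily classified samples, so no sampling step is needed.

    NOTE: `alpha` and `gamma` are placeholder names for the adjustable
    parameters described in the abstract; the actual parameterization in the
    paper may differ.
    """
    log_probs = F.log_softmax(logits, dim=1)               # SoftMax followed by log
    ce = F.nll_loss(log_probs, targets, reduction="none")  # per-sample cross-entropy
    pt = torch.exp(-ce)                                     # predicted probability of the true class
    loss = alpha * (1.0 - pt) ** gamma * ce                 # easy samples (pt close to 1) get a small weight
    return loss.mean()


# Usage: logits of shape (batch, classes), integer class targets of shape (batch,)
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
print(down_weighted_cross_entropy(logits, targets))
```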

Highlights

  • The loss function measures the difference between the model's output and the actual sample labels; it guides the model toward convergence during training, in which minimizing the loss value amounts to fitting the training data, minimizing the model's test error, and ultimately classifying new samples accurately [1]

  • The triplet-based margin loss relies on distance-weighted sampling, which can omit relevant samples during the sampling step. This paper therefore explores directly reducing the weight of easily classified samples, without any form of sampling, to address this loss of samples (see the sketch after this list)

  • We propose a loss function that reduces the difficulty of network training
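
For contrast with the sampling-free approach above, the following sketch shows what distance-weighted negative sampling for a margin/triplet loss can look like in practice. The scheme is only named, not specified, in the text, so every detail here is an illustrative assumption: embeddings are assumed L2-normalized, and each batch is assumed to contain at least one negative per anchor.

```python
import torch

def distance_weighted_negatives(embeddings, labels, cutoff=0.5):
    """Hypothetical sketch of distance-weighted negative sampling for a
    margin/triplet loss: one negative index is drawn per anchor with
    probability inversely proportional to the density q(d) of pairwise
    distances on the unit hypersphere, instead of picking the hardest
    negatives. Assumes L2-normalized embeddings of shape (n, dim)."""
    n, dim = embeddings.shape
    dist = torch.cdist(embeddings, embeddings)
    dist = dist.clamp(min=cutoff, max=1.99)       # keep log terms finite
    log_q = (dim - 2.0) * torch.log(dist) \
            + ((dim - 3.0) / 2.0) * torch.log(1.0 - 0.25 * dist.pow(2))
    log_w = (-log_q).clamp(max=50.0)              # inverse density, clipped for stability
    weights = torch.exp(log_w - log_w.max(dim=1, keepdim=True).values)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    weights = weights.masked_fill(same, 0.0)      # only other-class samples are candidates
    # assumes every anchor has at least one negative in the batch
    return torch.multinomial(weights, 1).squeeze(1)
```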


Summary

Introduction

The loss function measures the difference between the model's output and the actual sample labels; it guides the model toward convergence during training, in which minimizing the loss value amounts to fitting the training data, minimizing the model's test error, and ultimately classifying new samples accurately [1]. For some image classification datasets, it is sufficient to ensure accurate classification of the known categories. For fine-grained image classification, however, adopting SoftMax alone is far from enough. Simple metric learning methods such as DeepID2 learn features by combining the SoftMax loss with a contrastive loss [7], while the well-known FaceNet goes further and employs a triplet loss. Our loss function lowers the training difficulty and avoids the large volume of computation required by these metric learning methods. The innovative points of our research are as follows: (1) we propose a new loss function, (2) we use it to lower the training difficulty, and (3) it delivers better classification performance than traditional loss functions.
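
To make the metric-learning baselines mentioned above concrete, the sketch below shows a FaceNet-style triplet loss and a DeepID2-style combination of a SoftMax loss with a contrastive loss. Function names, the weighting factor `lam`, and the margins are illustrative assumptions, not the exact settings of those papers.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet loss: keep each anchor closer to its positive
    than to its negative by at least `margin`."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

def softmax_plus_contrastive(logits, targets, emb_a, emb_b, same_class,
                             margin=1.0, lam=0.05):
    """DeepID2-style objective: SoftMax (identification) loss on class logits
    combined with a contrastive (verification) loss on embedding pairs.
    `same_class` is a boolean tensor marking pairs from the same identity,
    and `lam` balances the two terms."""
    ident = F.cross_entropy(logits, targets)
    dist = F.pairwise_distance(emb_a, emb_b)
    contrastive = torch.where(same_class,
                              dist.pow(2),
                              F.relu(margin - dist).pow(2)).mean()
    return ident + lam * contrastive
```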

Related Work
Improved Loss Function
Experiments
Discussions
Conclusions