Abstract

A well-designed loss function can improve the representational power of network features without adding any computation at inference time, and has therefore become a focus of recent research. Because existing lightweight networks attach the loss only to the last layer, the gradient is severely attenuated during backpropagation. To address this, we propose a hierarchical polynomial kernel prototype loss. Attaching a polynomial kernel prototype loss to multiple stages of a deep neural network improves the efficiency of gradient flow, and because the multi-stage prototype losses are used only during training, the inference cost is unchanged. In addition, the strong nonlinear expressiveness of the polynomial kernel improves the discriminative power of the learned features. Experiments on multiple public datasets show that lightweight networks trained with the proposed hierarchical polynomial kernel loss achieve higher accuracy than those trained with other loss functions.
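To make the idea concrete, the following is a minimal PyTorch sketch of a multi-stage prototype loss built on a polynomial kernel k(x, p) = (x·p + c)^d. The kernel form, the per-class learnable prototypes, and all names and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolyKernelPrototypeLoss(nn.Module):
    """Prototype loss using a polynomial kernel k(x, p) = (x . p + c) ** d.

    Each class owns a learnable prototype; a feature is scored by its kernel
    similarity to every prototype, and cross-entropy over those similarities
    trains both the features and the prototypes. (Assumed formulation.)
    """

    def __init__(self, feat_dim, num_classes, degree=2, coef=1.0):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.degree = degree
        self.coef = coef

    def forward(self, feats, labels):
        feats = F.normalize(feats, dim=1)            # keep kernel values bounded
        protos = F.normalize(self.prototypes, dim=1)
        # With unit vectors and coef >= 1, the base is non-negative, so the
        # kernel is monotone in the dot product and usable as a logit.
        logits = (feats @ protos.t() + self.coef) ** self.degree
        return F.cross_entropy(logits, labels)


class HierarchicalPolyLoss(nn.Module):
    """One polynomial-kernel prototype head per supervised backbone stage.

    The heads exist only inside this loss module and are used only during
    training, so the backbone runs alone at inference with no extra cost.
    """

    def __init__(self, stage_dims, num_classes, weights=None):
        super().__init__()
        self.heads = nn.ModuleList(
            PolyKernelPrototypeLoss(d, num_classes) for d in stage_dims
        )
        self.weights = weights or [1.0] * len(stage_dims)

    def forward(self, stage_feats, labels):
        # stage_feats: list of globally pooled features, one per stage,
        # each of shape (batch, stage_dims[i]).
        return sum(
            w * head(f, labels)
            for w, head, f in zip(self.weights, self.heads, stage_feats)
        )


# Hypothetical usage with a backbone exposing pooled intermediate features:
# stage_feats = [pool(s1), pool(s2), pool(s3)]  # shapes (B, 128/256/512)
criterion = HierarchicalPolyLoss(stage_dims=[128, 256, 512], num_classes=100)
```

Because the per-stage heads live only in the loss module, they are simply discarded after training, which is how the multi-stage supervision avoids adding any inference-time computation.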
