Abstract

Residual neural networks are prone to two problems when used for face recognition: overfitting, and slow or stalled convergence of the loss function in the later stages of training. To address overfitting, this paper enlarges the training set by adding Gaussian noise and salt-and-pepper noise to the original images, thereby augmenting the data, and introduces dropout into the network to improve its generalization ability. In addition, the loss function and the optimization algorithm of the network are improved. After analyzing the advantages and disadvantages of the Softmax, center, and triplet losses, a joint loss function is proposed. As for the Adam algorithm, the optimizer most widely used for such networks, its convergence is relatively fast, but the converged result is not always satisfactory. Based on the characteristics of sample iteration during convolutional neural network training, this paper introduces a memory factor and momentum into the Adam optimization algorithm, which increases the speed of convergence and improves the quality of the converged solution. Simulation experiments on the data-augmented ORL and Yale face databases demonstrate the feasibility of the proposed method. Finally, the training time and power consumption of the network before and after the improvements are compared on the CMU_PIE database, and their overall performance is analyzed.
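
As a concrete illustration of the data-augmentation step described above, the sketch below adds Gaussian and salt-and-pepper noise to a grayscale face image with NumPy. The noise parameters (std=10, amount=0.02) and the 112x92 ORL-style image size are placeholder assumptions, not values taken from the paper.

```python
import numpy as np

def add_gaussian_noise(image, std=10.0):
    """Add zero-mean Gaussian noise to a uint8 grayscale image."""
    noisy = image.astype(np.float32) + np.random.normal(0.0, std, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_pepper_noise(image, amount=0.02):
    """Flip roughly a fraction `amount` of the pixels to pure black or white."""
    noisy = image.copy()
    mask = np.random.rand(*image.shape)
    noisy[mask < amount / 2] = 0            # pepper
    noisy[mask > 1 - amount / 2] = 255      # salt
    return noisy

# Each original face image contributes two additional noisy training samples.
face = np.random.randint(0, 256, size=(112, 92), dtype=np.uint8)  # ORL-sized placeholder
augmented = [face, add_gaussian_noise(face), add_salt_pepper_noise(face)]
```

The joint loss described in the abstract combines the Softmax (cross-entropy), center, and triplet losses. The following PyTorch-style sketch shows one plausible weighted combination; the class JointLoss, the weights lambda_center and lambda_triplet, and the margin are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class JointLoss(nn.Module):
    """Weighted sum of softmax (cross-entropy), center, and triplet losses.
    The weights and margin below are placeholder values; the paper's exact
    formulation is not reproduced here."""

    def __init__(self, num_classes, feat_dim, lambda_center=0.01,
                 lambda_triplet=0.1, margin=0.3):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.triplet = nn.TripletMarginLoss(margin=margin)
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.lambda_center = lambda_center
        self.lambda_triplet = lambda_triplet

    def forward(self, logits, feats, labels, anchor, positive, negative):
        softmax_loss = self.ce(logits, labels)
        # Center loss: pull each feature vector toward its class centre.
        center_loss = ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()
        # Triplet loss: keep the anchor closer to the positive than the negative.
        triplet_loss = self.triplet(anchor, positive, negative)
        return (softmax_loss
                + self.lambda_center * center_loss
                + self.lambda_triplet * triplet_loss)
```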

Highlights

  • Groundbreaking research on neural networks can be traced back to the 1980s

  • This paper introduces the loss function by examining the training process of the convolutional neural network

  • This paper introduces momentum and a memory factor into the Adam optimization algorithm at the same time, so that the learning rate keeps a monotonically decreasing trend throughout training (see the sketch following these highlights)
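
A minimal sketch of what an Adam variant with momentum and a memory factor could look like is given below. Since the paper's exact update rule is not shown in this excerpt, the AdamWithMemory class and its memory parameter are illustrative assumptions; their only purpose is to keep the effective step size from ever increasing, in line with the highlight above.

```python
import numpy as np

class AdamWithMemory:
    """A sketch of Adam augmented with a 'memory factor': the effective step
    size is an exponential moving average of a decaying target rate, so it is
    monotone non-increasing over training. This is an illustrative guess at
    the idea, not the paper's exact update rule."""

    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999, memory=0.9, eps=1e-8):
        self.lr, self.beta1, self.beta2 = lr, beta1, beta2
        self.memory, self.eps = memory, eps
        self.m = self.v = None          # first/second moment estimates
        self.t = 0                      # iteration counter
        self.step_size = lr             # remembered ("memorised") step size

    def update(self, params, grads):
        if self.m is None:
            self.m = np.zeros_like(params)
            self.v = np.zeros_like(params)
        self.t += 1
        # Momentum-style first moment and RMS-style second moment, as in Adam.
        self.m = self.beta1 * self.m + (1 - self.beta1) * grads
        self.v = self.beta2 * self.v + (1 - self.beta2) * grads ** 2
        m_hat = self.m / (1 - self.beta1 ** self.t)
        v_hat = self.v / (1 - self.beta2 ** self.t)
        # Memory factor: blend the remembered step size with a decaying target
        # (lr / sqrt(t)); the blended value never increases between iterations.
        self.step_size = (self.memory * self.step_size
                          + (1 - self.memory) * self.lr / np.sqrt(self.t))
        return params - self.step_size * m_hat / (np.sqrt(v_hat) + self.eps)
```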


Summary

Introduction

Groundbreaking research on neural networks can be traced back to the 1980s. In 1980, Fukushima of Kyoto University in Japan proposed a deep-structured neural network called the "neocognitron" [1], modeled on the visual cortex. This article counters overfitting of the network from two aspects: the training data are augmented with noise, and dropout is introduced into the network to improve its generalization ability.
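
As a rough illustration of the second aspect, the PyTorch-style sketch below inserts a dropout layer into a basic residual block. The placement between the two convolutions and the rate p=0.5 are assumptions, since the paper's exact architecture is not reproduced in this excerpt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlockWithDropout(nn.Module):
    """Basic residual block with a dropout layer between the two convolutions.
    Placement and rate (p=0.5) are illustrative assumptions."""

    def __init__(self, channels, p=0.5):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.drop = nn.Dropout(p)        # randomly zeroes activations in training
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.drop(out)             # active only in model.train() mode
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)           # identity shortcut
```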

Enhance Face Data
Dropout
Training of Neural Network
Softmax Loss
Center Loss
Triplet Loss
Joint Loss Function
Optimization Algorithm
Adam Algorithm
Adam Optimization Algorithm with Memory Factor
Adam Optimization Algorithm with Momentum
Improved Adam Optimization Algorithm
Improved Residual Network
Simulation Experiment
Conclusion
Findings
Future Work
