Abstract
Engineering data are often highly nonlinear and contain high-frequency noise, so a neural network trained with the Levenberg–Marquardt (LM) algorithm may fail to converge on such data. In this work, we analyzed the reasons for the poor convergence commonly associated with the LM algorithm. Specifically, we evaluated the effects of different activation functions, namely Sigmoid, Tanh, Rectified Linear Unit (ReLU), and Parametric Rectified Linear Unit (PReLU), on the overall performance of LM neural networks, and identified particular values of the network parameters that can cause the LM algorithm to converge poorly. We proposed an adaptive LM (AdaLM) algorithm to address this problem. The algorithm coordinates the descent direction and the descent step size through the iteration number, which prevents the cost function from falling into bad local minima and removes the dependence on the parameter state of the LM neural network. We compared the AdaLM algorithm with the traditional LM algorithm and its variants in terms of accuracy and speed on common benchmark datasets and aero-engine data, and the results verified the effectiveness of the AdaLM algorithm.
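For reference, the activation functions compared above can be sketched as follows (standard textbook definitions in NumPy, not taken from the paper's implementation; the PReLU slope `a` is an illustrative default, not a value from the study):

```python
import numpy as np

def sigmoid(x):
    # Sigmoid: output in (0, 1); saturates for large |x|, which can stall gradients
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Tanh: zero-centered, output in (-1, 1); also saturates for large |x|
    return np.tanh(x)

def relu(x):
    # ReLU: identity for x >= 0, zero otherwise; gradient is exactly zero for x < 0
    return np.maximum(0.0, x)

def prelu(x, a=0.25):
    # PReLU: like ReLU but with a (learnable) slope `a` on the negative side
    return np.where(x >= 0.0, x, a * x)
```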
Highlights
When applied to real-world data characterized by high nonlinearity and high-frequency noise, LM neural networks have irreplaceable advantages
This work can guide researchers who continue to use LM neural networks to adopt appropriate strategies, such as preprocessing the training data and intervening in the weights at the start of training, to avoid these problems
This study proposed a new solution to the convergence problem of LM neural networks: the adaptive LM (AdaLM) algorithm
Summary
When applied to real-world data characterized by high nonlinearity and high-frequency noise, LM neural networks have irreplaceable advantages. However, the activation functions in neural networks are not necessarily continuous and differentiable, so the cited convergence results hold only in their original settings and do not prove that global optimization can be achieved for the neural network model. By analyzing the output behavior of several activation functions, we explain in detail the specific factors that cause the cost function to fall into bad local minima under the original LM algorithm. In view of these factors, the new algorithm compensates for this deficiency of the LM algorithm and trains a network efficiently. Given a neural network model f(w), the cost function is the least-squares problem F(w) = Σ(y_label − f(w))².
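For context, a minimal sketch of one classical LM step on this least-squares cost is shown below, using the standard damped normal equations (JᵀJ + λI)Δw = Jᵀr. This is the textbook update, not the paper's AdaLM variant, and `residual_fn` and `jacobian_fn` are assumed user-supplied callables:

```python
import numpy as np

def lm_step(residual_fn, jacobian_fn, w, lam):
    """One classical Levenberg-Marquardt step on F(w) = sum((y_label - f(w))**2).

    Sketch only: the standard damped Gauss-Newton update, not the AdaLM
    variant proposed in the paper.
    """
    r = residual_fn(w)                       # residuals y_label - f(w), shape (m,)
    J = jacobian_fn(w)                       # Jacobian of f w.r.t. w, shape (m, n)
    A = J.T @ J + lam * np.eye(J.shape[1])   # damped normal-equation matrix
    g = J.T @ r                              # right-hand side
    dw = np.linalg.solve(A, g)               # solve (J^T J + lam I) dw = J^T r
    return w + dw
```

In the full algorithm, the damping factor λ is adjusted between iterations (increased when a step raises the cost, decreased when it lowers it), which is the dial that AdaLM ties to the iteration number.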