Abstract

The neural network is optimized with the Adam algorithm, which introduces first- and second-order moment estimates of the gradient. The learning rate is also adjusted dynamically across training stages to keep the network from converging slowly or oscillating. A three-layer neural network model is trained on a data set of 5000 samples to predict the performance of HSLA steel. The results show that, compared with the traditional SGD algorithm, Adam trains efficiently and stably: after 742 epochs with an initial learning rate of 0.001, the mean square error (MSE) on both the training and validation sets falls below 90. On a test set of 50 new samples, the prediction errors for yield strength and tensile strength are less than 19 MPa and 22 MPa respectively, and the elongation prediction error is less than 3.1%.
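
As a point of reference, the Adam update combining the first- and second-moment estimates mentioned above can be sketched as follows. This is a minimal illustration using the standard Adam default hyperparameters (beta1 = 0.9, beta2 = 0.999, eps = 1e-8) and the paper's initial learning rate of 0.001; it is not the authors' implementation, and the toy quadratic loss at the end is purely for demonstration.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are the first- and second-moment estimates."""
    m = beta1 * m + (1 - beta1) * grad       # first-order momentum of the gradient
    v = beta2 * v + (1 - beta2) * grad**2    # second-order momentum (squared gradient)
    m_hat = m / (1 - beta1**t)               # bias correction for early steps
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter adaptive step
    return theta, m, v

# Toy usage on the quadratic loss f(theta) = ||theta||^2 / 2, whose gradient is theta.
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 5001):
    grad = theta
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)  # has moved close to [0, 0]

Because the effective step size is rescaled per parameter by the second-moment estimate, the update behaves like a dynamically adjusted learning rate, which is the property the abstract credits for avoiding slow convergence and oscillation.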
