Abstract

Feedforward neural network (FNN) models with strong learning ability and high prediction accuracy are crucial for optimization. This paper investigates the effects of the number of training samples and the number of hidden layers on the accuracy of the FNN model. Meanwhile, a sample expansion strategy based on the max–min distance criterion is proposed that maintains a high degree of space-filling and ensures that the expanded sample set completely contains the pre-expansion set, thereby eliminating the interference caused by differences between sample sets. Furthermore, multi-objective optimization of the train nose shape is carried out with the FNN model by minimizing the aerodynamic lift force of the tail car (LT) as well as the aerodynamic drag forces of the head car (DH) and tail car (DT). The results indicate that the number of training samples has a greater impact on the prediction error of the FNN model than the number of hidden layers. Prediction errors decrease as the number of training samples increases and then stabilize; the most accurate model is chosen for nose shape optimization. The prediction errors of DH, DT, and LT are all below 2%. Compared with the original high-speed train, the DH, DT, and LT of the optimal model are reduced by 5.24%, 3.74%, and 2.61%, respectively. Correlation analysis further reveals that the height of the cab window and the horizontal profile have a significant impact on the aerodynamic characteristics of the high-speed train.
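The abstract does not give implementation details of the max–min distance expansion strategy; the sketch below is only a minimal illustration of one common interpretation, in which new design points are selected greedily from a random candidate pool so that each added point maximizes its minimum distance to all points already in the set, leaving the original samples unchanged. The function name `expand_samples_maximin`, the candidate pool size, and the number of design variables are illustrative assumptions, not the authors' method.

```python
import numpy as np

def expand_samples_maximin(existing, n_new, n_candidates=5000, seed=0):
    """Greedily append new points to an existing sample set (illustrative sketch).

    Each new point is the candidate with the largest minimum distance to all
    points already in the set, so the expanded set keeps a high degree of
    space-filling while completely containing the pre-expansion samples.
    Assumes a unit-hypercube design space.
    """
    rng = np.random.default_rng(seed)
    dim = existing.shape[1]
    samples = existing.copy()
    for _ in range(n_new):
        candidates = rng.random((n_candidates, dim))
        # Distance from every candidate to every current sample point.
        d = np.linalg.norm(candidates[:, None, :] - samples[None, :, :], axis=2)
        # Max–min criterion: pick the candidate farthest from its nearest neighbour.
        best = candidates[np.argmax(d.min(axis=1))]
        samples = np.vstack([samples, best])
    return samples

# Usage: expand a hypothetical 20-point initial design with 5 design variables to 40 points.
initial = np.random.default_rng(1).random((20, 5))
expanded = expand_samples_maximin(initial, n_new=20)
assert np.allclose(expanded[:20], initial)  # pre-expansion set is preserved
```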
