Abstract

The subject of this article is the numerical optimization techniques used to train neural networks that serve as predictor components in certain modern eddy viscosity models. Obtaining a high-quality solution to the training problem (minimization of the functional of neural network residuals) often requires significant computational cost, which motivates accelerating such training through a combination of numerical methods and parallelization of computations. The Marquardt method is of particular interest, as it contains a parameter that allows the solution to be accelerated by switching the method from gradient descent far from the solution to Newton's method near the approximate solution. The article proposes a modification of the Marquardt method that uses a limited series of random samples to improve the current point and to compute the parameter of the method. The author demonstrates the descent characteristics of the method in numerical experiments, both on the Himmelblau and Rosenbrock test functions and on the practical task of training a neural network predictor used in modeling turbulent flows. The proposed method can significantly speed up the training of the neural network predictor in corrective eddy viscosity models. It is less time-consuming than random search, particularly when only a small number of compute cores is available, while yielding a solution close to the random-search result and better than that of the original Marquardt method.
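The abstract does not detail the proposed random-sample modification, but the switching behavior it refers to is the damping parameter of the standard Levenberg-Marquardt update: a large value makes the step resemble gradient descent, a small value makes it resemble a Gauss-Newton (Newton-like) step. The sketch below illustrates this standard update on the Rosenbrock test function mentioned in the abstract; function and variable names are illustrative, not taken from the article.

```python
import numpy as np

def levenberg_marquardt_step(residuals, jacobian, params, lam):
    """One damped Gauss-Newton (Levenberg-Marquardt) update.

    residuals : callable returning the residual vector r(params)
    jacobian  : callable returning the Jacobian J(params) of r
    lam       : damping parameter; large lam -> gradient-descent-like step,
                small lam -> Newton-like (Gauss-Newton) step
    """
    r = residuals(params)
    J = jacobian(params)
    # Solve (J^T J + lam * I) delta = -J^T r for the update direction.
    A = J.T @ J + lam * np.eye(J.shape[1])
    delta = np.linalg.solve(A, -J.T @ r)
    return params + delta

# Example: Rosenbrock function in least-squares form,
# f(x, y) = (1 - x)^2 + 100 * (y - x^2)^2.
rosen_res = lambda p: np.array([1.0 - p[0], 10.0 * (p[1] - p[0] ** 2)])
rosen_jac = lambda p: np.array([[-1.0, 0.0], [-20.0 * p[0], 10.0]])

p = np.array([-1.2, 1.0])
lam = 10.0
for _ in range(200):
    p_new = levenberg_marquardt_step(rosen_res, rosen_jac, p, lam)
    # Simple adaptive damping: accept the step and relax the damping if the
    # residual norm improves; otherwise reject the step and increase damping.
    if np.linalg.norm(rosen_res(p_new)) < np.linalg.norm(rosen_res(p)):
        p, lam = p_new, lam * 0.5
    else:
        lam *= 2.0
print(p)  # converges toward the minimum at (1, 1)
```

In this baseline scheme the damping parameter is adjusted by a fixed accept/reject rule; the article's modification instead draws on a limited series of random samples to improve the current point and to set this parameter.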
