Abstract

An optimization method combining deep reinforcement learning (DRL) and computational fluid dynamics (CFD) is developed, and its effectiveness and limitations are investigated. As the target application, an optimization problem is set up to find the geometry parameters of a wavy airfoil that maximize the lift–drag ratio. Twin delayed deep deterministic policy gradient (TD3) is adopted as the DRL algorithm, and a CFD code based on a standard scheme for viscous incompressible flows is used to calculate the lift–drag ratio. The neural networks learn a policy for improving the lift–drag ratio by changing the geometry parameters of the airfoil at a fixed angle of attack (AoA) of 0° and successfully reach the maximum lift–drag ratio: the final shape obtained is almost the same as that found by a gradient method. However, when the global optimum lies near the penalized region, the DRL approach tends to fall into local optima. The effects of several DRL settings, such as the reward function and the number of sample points used in random exploration, are investigated. Moreover, by reusing a network trained at an AoA of 0°, a converged solution is obtained more quickly at AoAs different from the trained case, provided an appropriate reward function is set. This indicates the possibility of transfer learning.
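
To illustrate the coupling described above, the following is a minimal, hypothetical sketch of how such a DRL environment might be structured: the agent's continuous action perturbs the airfoil geometry parameters, shapes that leave the feasible region are penalized, and the reward is tied to the lift–drag ratio. The class name, parameter bounds, penalty value, and the toy surrogate that replaces the actual CFD solve are illustrative assumptions, not the authors' implementation.

import numpy as np

class WavyAirfoilEnv:
    """Hypothetical sketch of the DRL environment: the agent nudges the
    wavy-airfoil geometry parameters and is rewarded according to the
    lift-drag ratio; the CFD solve is replaced by a toy surrogate here."""

    def __init__(self, n_params=3, max_steps=50, penalty=-1.0):
        self.n_params = n_params      # e.g. wave amplitude, wavelength, thickness
        self.max_steps = max_steps
        self.penalty = penalty        # reward when the geometry leaves the feasible region
        self.reset()

    def reset(self):
        self.params = np.zeros(self.n_params)   # start from the baseline airfoil
        self.steps = 0
        return self.params.copy()

    def lift_drag_ratio(self, params):
        # Stand-in for the viscous incompressible CFD evaluation of L/D at the
        # prescribed AoA; a smooth toy function keeps this sketch runnable.
        return 10.0 - float(np.sum((params - 0.3) ** 2))

    def step(self, action):
        # TD3 requires continuous actions: here, small parameter increments.
        self.params = self.params + np.clip(action, -0.05, 0.05)
        self.steps += 1
        if np.any(np.abs(self.params) > 1.0):        # infeasible shape: penalized region
            return self.params.copy(), self.penalty, True, {}
        reward = self.lift_drag_ratio(self.params)   # could also be the improvement in L/D
        done = self.steps >= self.max_steps
        return self.params.copy(), reward, done, {}

With this interface, a TD3 agent would repeatedly call step with small geometry updates and learn a policy that drives the parameters toward the maximum lift–drag ratio; changing the reward definition (absolute L/D versus its improvement) corresponds to one of the DRL settings whose effect the paper investigates.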
