Abstract

This paper applies deep reinforcement learning (DRL) to the synthetic jet control of flow over a NACA (National Advisory Committee for Aeronautics) 0012 airfoil under weakly turbulent conditions. Based on the proximal policy optimization (PPO) method, an effective strategy for controlling the mass flow rate of a synthetic jet is obtained at Re = 3000. The effectiveness of the DRL-based active flow control (AFC) method is first demonstrated on the problem with a constant inlet velocity, where a remarkable drag reduction of 27.0% and a lift enhancement of 27.7% are achieved, accompanied by the elimination of vortex shedding. The complexity of the problem is then increased by changing the inlet velocity condition and the reward function of the DRL algorithm. In particular, inlet velocities pulsating at two different frequencies, as well as their combination, are applied, which makes the airfoil wake more difficult to suppress dynamically and precisely, and the reward function is extended with the goal of saving the energy consumed by the synthetic jets. After training, the DRL agent is still able to find a proper control strategy that achieves significant drag reduction and lift stabilization, and the energy-aware agent reduces the energy consumption of the synthetic jets by 83%. The performance of the DRL-based AFC demonstrates the strong ability of DRL to handle fluid dynamics problems, which typically exhibit high nonlinearity, and encourages further investigation of DRL-based AFC.
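As a rough illustration of the kind of reward shaping described above (rewarding drag reduction and lift stabilization while penalizing the energy spent by the synthetic jet), a minimal Python sketch is given below. The baseline drag coefficient, lift target, weighting coefficients, and function names are illustrative assumptions and are not the paper's actual settings.

    # Hypothetical reward shaping for DRL-based active flow control.
    # All numerical values and names below are assumed for illustration only;
    # the paper's actual reward formulation and coefficients are not given here.

    def reward(c_d, c_l, jet_mass_rate,
               c_d_baseline=0.15, c_l_target=0.40,
               w_drag=1.0, w_lift=0.2, w_energy=0.05):
        """Combine drag reduction, lift stabilization, and jet energy saving."""
        drag_term = w_drag * (c_d_baseline - c_d)       # positive when drag drops below baseline
        lift_term = -w_lift * abs(c_l - c_l_target)     # penalize lift fluctuation around a target
        energy_term = -w_energy * jet_mass_rate ** 2    # penalize actuation (jet mass flow) effort
        return drag_term + lift_term + energy_term

    # Example: one control step with reduced drag and a small jet mass flow rate.
    print(reward(c_d=0.11, c_l=0.38, jet_mass_rate=0.02))

In this sketch, increasing the energy weight w_energy would push the agent toward strategies that spend less jet mass flow, mirroring the energy-saving objective added to the reward function in the paper.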
