Abstract

We introduce single-step proximal policy optimization (PPO), a deep reinforcement learning algorithm for situations where the optimal policy to be learned by the neural network does not depend on state. The approach is assessed on open-loop control problems of laminar and turbulent flows (Reynolds numbers up to a few $10^4$). For turbulent flow past a square cylinder, it reduces drag by 30% by finding the best placement of a small control cylinder, in agreement with existing experimental data. For turbulent flow past a fluidic pinball, it reduces drag by 60% using boat-tailing actuation made up of a slowly rotating front cylinder and two downstream cylinders rotating in opposite directions, matching existing machine learning results.
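
The core idea, a stateless policy whose parameters define a distribution over open-loop control parameters and are updated with the usual PPO clipped surrogate after each batch of single-step episodes, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the Gaussian policy, batch size, learning rate, and the `evaluate_drag` placeholder (standing in for a CFD evaluation of the controlled flow) are all hypothetical choices made for the example.

```python
# Minimal sketch of single-step PPO for state-independent (open-loop) control.
# The reward function below is a toy placeholder, NOT a flow solver.
import torch

torch.manual_seed(0)

# Learnable mean and log-std of a Gaussian policy over, e.g., the (x, y)
# placement of a small control cylinder. There is no state input: the policy
# is a constant distribution, which is what "single-step" refers to here.
mean = torch.zeros(2, requires_grad=True)
log_std = torch.zeros(2, requires_grad=True)
optimizer = torch.optim.Adam([mean, log_std], lr=5e-2)

def evaluate_drag(actions):
    # Placeholder black-box cost: in the paper's setting this would be a CFD
    # run returning the drag for each candidate set of control parameters.
    target = torch.tensor([1.0, -0.5])
    return ((actions - target) ** 2).sum(dim=-1)

clip_eps = 0.2
for epoch in range(200):
    dist = torch.distributions.Normal(mean, log_std.exp())
    actions = dist.sample((32,))                  # one action per episode
    rewards = -evaluate_drag(actions)             # maximize reward = -drag
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    old_log_prob = dist.log_prob(actions).sum(-1).detach()
    for _ in range(5):                            # a few PPO update passes
        new_dist = torch.distributions.Normal(mean, log_std.exp())
        ratio = (new_dist.log_prob(actions).sum(-1) - old_log_prob).exp()
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
        loss = -torch.min(ratio * advantages, clipped * advantages).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

print("learned mean action:", mean.detach())
```

Because each episode consists of a single action and a single reward, the advantage reduces to the batch-normalized reward, so no value network or temporal bootstrapping is needed.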
