Abstract

Engine performance seeking control plays a key role in engine operation, and finding an algorithm that makes the control process fast and stable without violating engine constraints is essential. In recent years deep reinforcement learning has developed rapidly, producing many new algorithms and advancing quickly in the field of control. Given the strong perception and decision-making ability of deep reinforcement learning, and in order to handle the strong nonlinearity of the engine model while avoiding local optima, this paper applies the SAC (Soft Actor-Critic) algorithm to optimize transition-state performance. By introducing the entropy of the policy into the objective function, the search stability of the algorithm and the final optimization result are significantly improved. Finally, SAC is compared with DDPG (Deep Deterministic Policy Gradient) on the same transition-state optimization task, and the results show that SAC achieves a better optimization effect on transition-state performance.
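For context, the entropy-regularized objective that distinguishes SAC from deterministic-policy methods such as DDPG can be written in the standard form from the SAC literature; the abstract does not reproduce the paper's exact formulation, so the temperature parameter \alpha below is the usual convention rather than a value taken from this work:

J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \big[ r(s_t, a_t) + \alpha \, \mathcal{H}(\pi(\cdot \mid s_t)) \big], \qquad \mathcal{H}(\pi(\cdot \mid s_t)) = -\mathbb{E}_{a \sim \pi} \big[ \log \pi(a \mid s_t) \big]

Here \mathcal{H} is the policy entropy at state s_t and \alpha weights exploration against reward; setting \alpha = 0 recovers the plain expected-return objective that DDPG maximizes, which is consistent with the abstract's claim that the entropy term is what improves search stability.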
