Abstract

Speed control of ultrasonic motors (USMs) needs to be precise, fast, and robust; however, this is challenging because of the motors' nonlinear behavior, including nonlinear response, the pull-out phenomenon, and speed hysteresis. Linear controllers are therefore suboptimal and can become unstable, while nonlinear controllers require expert knowledge, expensive online computation, or costly model estimation. In this paper, we propose a model-free nonlinear offline controller that significantly mitigates these challenges. A neural network speed controller was optimized using deep reinforcement learning (DRL). The soft actor-critic (SAC) algorithm was chosen for its sample efficiency, fast convergence, and stable learning. To promote controller stability, a custom control Lyapunov reward function was proposed. The steady-state USM behavior was modeled mathematically to ease controller design in simulation. The SAC agent was first designed and trained in simulation and then trained further experimentally. The experimental results show that the trained controller successfully expands the speed operation range ([0, 300] rpm), plans optimal control trajectories, and stabilizes performance under varying load torque and temperature drift.
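To illustrate the idea of a control-Lyapunov-shaped reward, the sketch below uses the squared speed-tracking error as a candidate Lyapunov function and rewards its decrease along the trajectory. The exact reward used in the paper is not given in the abstract; the function name, the quadratic candidate V(e) = e², and the weights `alpha` and `beta` are all illustrative assumptions.

```python
def lyapunov_reward(speed_error: float, next_speed_error: float,
                    alpha: float = 1.0, beta: float = 0.1) -> float:
    """Hypothetical control-Lyapunov-style reward for a speed-tracking agent.

    V(e) = e^2 is taken as a candidate Lyapunov function of the tracking
    error e (reference speed minus measured speed). The reward is positive
    when V decreases over a step (the error shrinks) and negative when it
    grows, nudging the policy toward stabilizing actions. A small penalty
    on the remaining error discourages settling at a nonzero offset.
    """
    v_now = speed_error ** 2
    v_next = next_speed_error ** 2
    return alpha * (v_now - v_next) - beta * abs(next_speed_error)
```

Rewarding the decrease of a Lyapunov candidate, rather than only penalizing raw error, ties each step's reward to the stability condition V(e_{t+1}) < V(e_t), which is one common way to encode closed-loop stability into DRL reward shaping.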
