Abstract
Electric motors are used in many applications, and their efficiency is strongly dependent on their control. Linear feedback approaches and model predictive control methods, among others, are well established in the scientific literature and in industrial practice. A novel approach is to use reinforcement learning (RL) to have an agent learn electric drive control from scratch merely by interacting with a suitable control environment. RL has achieved remarkable results with superhuman performance in many games (e.g., Atari classics or Go) and is also becoming more popular in control tasks, such as cart-pole or swinging-pendulum benchmarks. In this work, the open-source Python package gym-electric-motor (GEM) is developed to ease the training of RL agents for electric motor control. Furthermore, this package can be used to compare the trained agents with other state-of-the-art control approaches. It is based on the OpenAI Gym framework, which provides a widely used interface for the evaluation of RL agents. The package covers different dc and three-phase motor variants, as well as different power electronic converters and mechanical load models. Due to the modular setup of the proposed toolbox, additional motors, loads, and power electronic devices can easily be added in the future. Furthermore, different secondary effects, such as converter interlocking time or noise, are considered. An intelligent controller example based on the deep deterministic policy gradient algorithm that controls a series dc motor is presented and compared to a cascaded proportional-integral controller as a baseline for future research. Here, safety requirements are particularly highlighted as an important constraint for data-driven control algorithms applied to electric energy systems. Fellow researchers are encouraged to use the GEM framework in their RL investigations or to contribute to the functional scope (e.g., further motor types) of the package.
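To illustrate the Gym-style agent-environment interaction loop that GEM builds on, the following sketch uses a toy stand-in environment rather than the GEM API itself. The first-order series dc motor current model, its parameter values (resistance, inductance, dc-link voltage, current reference, safety limit), and the random-action agent are all illustrative assumptions introduced here, not taken from the paper or the package.

```python
import numpy as np

class ToySeriesDcEnv:
    """Hypothetical stand-in for a GEM environment: a first-order armature
    current model driven by a normalized voltage action. Only the Gym-style
    reset/step interface is the point; all parameters are assumptions."""

    def __init__(self, tau=1e-4, r=0.5, l=1e-3, u_dc=400.0):
        self.tau = tau    # sampling time in s (assumed)
        self.r = r        # armature resistance in ohm (assumed)
        self.l = l        # armature inductance in H (assumed)
        self.u_dc = u_dc  # dc-link voltage in V (assumed)
        self.i = 0.0      # armature current state in A

    def reset(self):
        self.i = 0.0
        return np.array([self.i])

    def step(self, action):
        # Forward-Euler step of di/dt = (u - r*i) / l; back-EMF neglected
        u = float(np.clip(action, -1.0, 1.0)) * self.u_dc
        self.i += self.tau * (u - self.r * self.i) / self.l
        i_ref = 50.0                       # constant current reference in A (assumed)
        reward = -abs(self.i - i_ref)      # tracking-error penalty
        done = abs(self.i) > 200.0         # safety limit ends the episode
        return np.array([self.i]), reward, done, {}

env = ToySeriesDcEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(100):
    action = np.random.uniform(-1.0, 1.0)  # random agent as a placeholder for DDPG
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        obs = env.reset()                  # restart after a safety violation
```

A trained agent (e.g., a DDPG policy) would replace the random action; the safety-limit check in `step` mirrors the constraint handling highlighted in the abstract.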
Published in: IEEE Transactions on Neural Networks and Learning Systems