Abstract

Reinforcement learning (RL)-based methods are an emerging approach to the control of power systems such as electric drives. Unlike most common state-of-the-art approaches, these data-driven techniques do not require an explicit plant model. Instead, the control policy is continuously improved solely based on measurement feedback, pursuing optimal control performance through learning. While the general feasibility of RL-based drive control algorithms has already been proven in simulation, this work focuses on transferring the methodology to real-world experiments. In the case of electric motor control, strict real-time requirements, safety constraints, system delays and the limitations of embedded hardware frameworks are hurdles to overcome. Hence, several modifications to the general RL training setup are introduced in order to enable RL in real-world electric drive control problems. In particular, a rapid control prototyping toolchain is introduced that allows fast and flexible testing of arbitrary RL algorithms. This simulation-to-experiment pipeline is considered an important intermediate step towards introducing RL in embedded control for power electronic systems. To highlight the potential of RL-based drive control, extensive experimental investigations addressing the current control of a permanent magnet synchronous motor utilizing a deep deterministic policy gradient algorithm have been conducted. Despite the early state of research in this domain, promising control performance was achieved.

Highlights

  • Optimal electric motor control is of prime interest for various applications that depend on high-performance drive systems

  • A batched version of a deep deterministic policy gradient (DDPG) algorithm [23] is extended to learn the current control policy for a permanent magnet synchronous motor (PMSM) that is fed by a B6-bridge power electronic converter (a minimal update sketch follows this list)

  • It should be noted that, while the specific implementation is adapted to dSPACE hardware and software, the general concept of edge computing-based reinforcement learning (RL) with an asynchronous training pipeline decoupled from the embedded actor is generalizable to any hardware setup
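
To make the batched DDPG highlight above concrete, below is a minimal sketch of one batched actor-critic update step. It assumes a PyTorch implementation; the state layout (measured d/q currents, electrical speed, and current set-points), network sizes, and hyperparameters are illustrative assumptions, not the paper's actual values.

```python
# Minimal sketch of one batched DDPG update step (assumed PyTorch setup).
# State/action layout and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM = 5   # e.g., [i_d, i_q, omega_el, i_d_ref, i_q_ref] (assumed layout)
ACTION_DIM = 2  # e.g., normalized d/q voltage commands (assumed)

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh(),  # bounded actions in [-1, 1]
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_target, critic_target = Actor(), Critic()
actor_target.load_state_dict(actor.state_dict())
critic_target.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 5e-3  # illustrative discount / soft-update factors

def ddpg_update(batch):
    """One gradient step on a replay-buffer batch (s, a, r, s', done)."""
    s, a, r, s2, done = batch
    # Critic: regress Q(s, a) onto the bootstrapped TD target.
    with torch.no_grad():
        q_target = r + GAMMA * (1 - done) * critic_target(s2, actor_target(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's value estimate of the policy's own actions.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Polyak-average the target networks.
    for tgt, src in ((actor_target, actor), (critic_target, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)

# Example call with a random batch of 64 transitions:
if __name__ == "__main__":
    B = 64
    batch = (torch.randn(B, STATE_DIM), torch.rand(B, ACTION_DIM) * 2 - 1,
             torch.randn(B, 1), torch.randn(B, STATE_DIM), torch.zeros(B, 1))
    ddpg_update(batch)
```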

Summary

INTRODUCTION

Optimal electric motor control is of prime interest for various applications (e.g., automation and automotive engineering) that depend on high-performance drive systems. A batched version of a deep deterministic policy gradient (DDPG) algorithm [23] is extended to learn the current control policy for a permanent magnet synchronous motor (PMSM) that is fed by a B6-bridge power electronic converter. Further innovations of this contribution handle the safety constraints and system delays in the case of RL motor control, including extensions to the baseline DDPG algorithm [23]. An actor-critic-based RL approach is depicted in Fig. 1. The rapid control prototyping toolchain can be directly applied to value-based RL techniques such as (double) deep Q-networks [24], [25], too. It allows many potentially interesting RL algorithms to be seamlessly plugged in and tested on a Python basis, avoiding cumbersome embedded software implementations of each and every algorithm; a sketch of such an asynchronous pipeline is given below.
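
As an illustration of this decoupled toolchain idea, the following is a hedged Python sketch of an asynchronous training pipeline: a stub standing in for the real-time embedded actor streams transitions into a queue, while a separate trainer thread consumes them in batches. All interface names (embedded_actor_stub, the commented ddpg_update and push_policy_weights calls) are hypothetical and do not reflect the paper's actual dSPACE interface.

```python
# Hedged sketch of an asynchronous, edge-computing-style RL pipeline:
# the (stubbed) real-time actor and the Python trainer run decoupled.
import queue
import random
import threading
import time

transition_queue: "queue.Queue[tuple]" = queue.Queue(maxsize=10_000)
replay_buffer: list = []
BATCH_SIZE = 64

def embedded_actor_stub(n_steps: int) -> None:
    """Stand-in for the real-time system: emits (s, a, r, s') transitions."""
    for _ in range(n_steps):
        transition_queue.put((random.random(),) * 4)  # placeholder transition
        time.sleep(1e-4)  # mimics the fixed sampling period of the drive

def trainer(stop: threading.Event) -> None:
    """Asynchronous learner: drains the queue and runs update steps."""
    while not stop.is_set():
        try:
            replay_buffer.append(transition_queue.get(timeout=0.1))
        except queue.Empty:
            continue
        if len(replay_buffer) >= BATCH_SIZE:
            batch = random.sample(replay_buffer, BATCH_SIZE)  # noqa: F841
            # ddpg_update(batch)         # e.g., the update sketched earlier
            # push_policy_weights(actor) # hypothetical transfer to target HW

if __name__ == "__main__":
    stop = threading.Event()
    t = threading.Thread(target=trainer, args=(stop,), daemon=True)
    t.start()
    embedded_actor_stub(1_000)  # run the "plant" for a short while
    stop.set(); t.join()
```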

DRIVE SYSTEM MODEL
POWER ELECTRONIC CONVERTER
REINFORCEMENT LEARNING MODIFICATIONS
EXPERIMENTAL TEST SETUP
SOFTWARE SETUP
PRE-TRAINING OF THE MOTOR CONTROLLER
EXPERIMENTAL INVESTIGATION
TRANSIENT TESTS
STEADY-STATE TESTS
CONCLUSION AND OUTLOOK
