Abstract

Recent advancements in deep reinforcement learning for real control tasks have attracted interest from researchers and field engineers across a variety of industrial areas. However, in most cases, optimal policies obtained by deep reinforcement learning are difficult to implement on cost-effective and lightweight platforms such as mobile devices, owing to their computational complexity and excessive memory usage. For this reason, this article proposes an off-policy deep reinforcement learning algorithm called the sparse variational deterministic policy gradient (SVDPG). SVDPG provides highly efficient policy network compression within the standard reinforcement learning framework. The proposed SVDPG integrates Bayesian pruning, a state-of-the-art neural network compression technique, with the policy update in an actor–critic architecture. It is demonstrated that SVDPG achieves a high compression rate of policy networks on continuous control benchmark tasks while preserving competitive performance. The advantage of SVDPG on low-computing-power devices is demonstrated by comparing memory requirements and computation time on a commercial microcontroller unit. Finally, it is confirmed that the proposed SVDPG is also reliable in real-world scenarios, as it can be applied to the swing-up control of an inverted pendulum system.
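The Bayesian pruning step referenced in the abstract can be illustrated with a minimal sketch in the style of sparse variational dropout: each weight carries a mean and a learned log-variance, and weights whose noise-to-signal ratio (log alpha) exceeds a threshold are zeroed out. All names, shapes, and the threshold below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of variational-dropout-style pruning of one policy-network
# layer. The values of theta and log_sigma2 stand in for parameters that would
# be trained under the variational objective; they are randomly generated here.
rng = np.random.default_rng(0)

theta = rng.normal(0.0, 1.0, size=(4, 8))        # mean weights of a 4x8 layer
log_sigma2 = rng.normal(-2.0, 3.0, size=(4, 8))  # learned per-weight log-variances

# Per-weight dropout rate: log(alpha) = log(sigma^2) - log(theta^2).
log_alpha = log_sigma2 - np.log(theta ** 2 + 1e-8)

# Prune weights whose noise dominates the signal; log_alpha > 3 is a commonly
# cited threshold (corresponding to a dropout rate near 1).
mask = log_alpha <= 3.0
sparse_theta = theta * mask

compression = 1.0 - mask.mean()  # fraction of weights removed
print(f"pruned {compression:.0%} of weights")
```

After training, only the surviving weights and their indices need to be stored, which is what makes the compressed policy practical on a microcontroller-class device.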
