Abstract

Reinforcement learning (RL) has shown outstanding performance in handling complex tasks in recent years. The eligibility trace (ET), a fundamental mechanism in reinforcement learning, records critical states with attenuation and guides the update of the policy, playing a crucial role in accelerating the convergence of RL training. However, ET implementation on conventional digital computing hardware is energy-hungry and restricted by the memory wall due to the massive computation of exponential decay functions. Here, an in-memory realization of ET for energy-efficient reinforcement learning with outstanding performance in discrete- and continuous-state RL tasks is demonstrated. For the first time, the inherent conductance drift of phase change memory is exploited as a physical decay function to realize an in-memory eligibility trace, demonstrating excellent performance during RL training across various tasks. The spontaneous in-memory decay computation and the storage of the policy in the same phase change memory give rise to significantly enhanced energy efficiency compared with traditional graphics processing unit platforms. This work therefore provides a holistic, energy- and hardware-efficient method for both training and inference of reinforcement learning.
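To make the mechanism concrete, the sketch below shows how an eligibility trace with exponential decay assigns credit in tabular TD(λ) on a small random-walk chain. This is a generic textbook formulation, not the paper's hardware method; all parameters (`alpha`, `gamma`, `lam`, the 5-state chain) are illustrative assumptions. The in-memory approach described above replaces the explicit `e *= gamma * lam` decay step with the physical conductance drift of phase change memory.

```python
import random

def td_lambda(episodes=200, n_states=5, alpha=0.1, gamma=1.0, lam=0.9, seed=0):
    """Tabular TD(lambda) with accumulating eligibility traces on a
    random-walk chain. States 0 and n_states+1 are terminal; reaching
    the right terminal yields reward 1, the left terminal reward 0."""
    rng = random.Random(seed)
    V = [0.0] * (n_states + 2)          # state-value estimates
    for _ in range(episodes):
        e = [0.0] * (n_states + 2)      # eligibility trace, reset per episode
        s = (n_states + 1) // 2         # start in the middle of the chain
        while s not in (0, n_states + 1):
            s_next = s + rng.choice((-1, 1))
            r = 1.0 if s_next == n_states + 1 else 0.0
            delta = r + gamma * V[s_next] - V[s]   # TD error
            e[s] += 1.0                 # mark the visited state as eligible
            for i in range(len(V)):
                V[i] += alpha * delta * e[i]  # credit recently visited states
                e[i] *= gamma * lam           # exponential decay of the trace
            s = s_next
    return V

values = td_lambda()
```

The decay step is the costly part on digital hardware: every stored trace entry must be read, multiplied by `gamma * lam`, and written back at every time step, which is exactly the computation the paper offloads to the memory device's intrinsic drift.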
