Abstract
Three modifications to the Boxes-ASE/ACE reinforcement learning scheme improve implementation efficiency and performance. A state history queue (SHQ) eliminates computations for temporally insignificant states. A dynamic link table allocates control memory only to states the system actually traverses. CMAC state association reuses previous learning to decrease training time. Simulations show a 4-fold improvement in learning. In a hardware implementation of the pole-cart balancer, the SHQ reduces computation time 11-fold.
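The two memory-side ideas above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names, the two-weight (ASE/ACE) layout per state, and the queue depth are all assumptions introduced here.

```python
from collections import deque


class DynamicLinkTable:
    """Lazily allocate per-state controller memory only when a state
    is first traversed, instead of pre-allocating the full state space."""

    def __init__(self):
        # state id -> [ase_weight, ace_weight]  (illustrative layout)
        self.table = {}

    def get(self, state):
        # allocate on first visit; untraversed states cost no memory
        if state not in self.table:
            self.table[state] = [0.0, 0.0]
        return self.table[state]


class StateHistoryQueue:
    """Keep only the most recent states; states that fall off the queue
    are treated as temporally insignificant and receive no updates."""

    def __init__(self, depth=10):
        self.queue = deque(maxlen=depth)  # oldest entries drop automatically

    def visit(self, state):
        self.queue.append(state)

    def active_states(self):
        # only these states would be updated on the current step
        return list(self.queue)
```

In this sketch, each learning step would iterate only over `active_states()` rather than the whole state space, which is the source of the claimed reduction in per-step computation.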
Published in: IEEE Transactions on Systems, Man, and Cybernetics