Abstract

Deep reinforcement learning is one of the research hotspots in artificial intelligence and has been applied successfully in many research areas; however, low training efficiency and high sample demand limit its wider application. To address these problems, this paper proposes a hierarchical episodic control model, inspired by the rapid learning mechanisms of the hippocampus, that extends episodic memory to the domain of hierarchical reinforcement learning. The model employs a hierarchical implicit memory planning approach for counterfactual trajectory value estimation: starting from the final step and moving recursively back along the trajectory, a hidden plan is formed within the episodic memory. Experience is aggregated both along a trajectory and across trajectories, and the model is updated through multi-headed backpropagation, similar to bootstrapped neural networks. The model thus extends the parameterized episodic memory framework to hierarchical reinforcement learning, and its convergence and effectiveness are analyzed theoretically. Experiments in four-room games, MuJoCo, and UE4-based active tracking show that the hierarchical episodic control model markedly improves training efficiency, in both low-dimensional and high-dimensional environments and even under sparse rewards. The model is therefore suitable for application scenarios that do not rely heavily on exploration, such as unmanned aerial vehicles, robot control, and computer vision.
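
As a rough illustration of the backward recursion described above, the following minimal Python sketch propagates returns from the final step of a trajectory back to its start and aggregates each return with the best value already stored in an episodic memory. The function name, the tabular dict-based memory, and the state keys are illustrative assumptions, not the paper's actual interface; the paper's parameterized, hierarchical memory would replace the plain dictionary used here.

```python
def backward_episodic_values(trajectory, memory, gamma=0.99):
    """Estimate counterfactual trajectory values by recursing backward.

    A minimal sketch under simplifying assumptions:
      trajectory: list of (state_key, reward) pairs, in time order.
      memory:     dict mapping state_key -> best value seen so far.
    """
    ret = 0.0
    # Start from the final step and move back along the trajectory.
    for state_key, reward in reversed(trajectory):
        ret = reward + gamma * ret
        # Counterfactual aggregation: keep the better of the realized
        # return (along this trajectory) and the best value recorded
        # for this state in earlier episodes (across trajectories).
        ret = max(ret, memory.get(state_key, float("-inf")))
        memory[state_key] = ret
    return memory
```

For example, `backward_episodic_values([("s0", 0.0), ("s1", 1.0)], {})` returns `{"s1": 1.0, "s0": 0.99}`, mapping each visited state to its best discounted return so far; in the hierarchical setting, a higher-level controller could apply the same backup over subgoal transitions.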
