Reinforcement learning (RL) requires many interactions with the environment to converge to an optimal strategy, which makes it infeasible to apply to wheel loaders and the bucket-filling problem without a simulator. However, pile dynamics are difficult to model in simulation because of unknown parameters, which results in poor transfer from simulation to the real environment. Instead, this paper uses world models as a fast surrogate simulator, creating a dream environment in which an RL agent explores and optimizes its bucket-filling behavior. The trained agent is then deployed on a full-size wheel loader without modification and outperforms the previous benchmark controller, which was synthesized using imitation learning. It also matches the performance of a controller that was pre-trained with imitation learning and further optimized on the test pile using RL.
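
The following is a minimal, hypothetical sketch of the general idea described above: a world model learned from logged interaction data serves as a fast surrogate simulator, and a policy is optimized entirely on imagined ("dream") rollouts inside that model. The toy state, action, dynamics, reward, and the linear model and cross-entropy policy search are all illustrative assumptions, not the paper's actual architecture or training procedure.

```python
# Hypothetical sketch: RL inside a learned world model ("dream" environment).
# Toy stand-ins for the bucket-filling problem (pile state, lift/tilt commands,
# filled mass); not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HORIZON = 4, 2, 25

# --- Pretend interaction data were logged on the real machine ---------------
def real_step(s, a):
    # Unknown true dynamics; only available through collected data.
    return 0.9 * s + 0.3 * np.tanh(a).repeat(STATE_DIM // ACTION_DIM)

data_s = rng.normal(size=(500, STATE_DIM))
data_a = rng.normal(size=(500, ACTION_DIM))
data_next = np.array([real_step(s, a) for s, a in zip(data_s, data_a)])

# --- Fit a world model: one-step predictor s' ~ [s, a] @ W ------------------
X = np.hstack([data_s, data_a])
W, *_ = np.linalg.lstsq(X, data_next, rcond=None)

def dream_step(s, a):
    # Surrogate dynamics used instead of the real environment or a physics sim.
    return np.hstack([s, a]) @ W

def reward(s, a):
    # Toy reward: increase "fill" (state norm) while penalizing control effort.
    return np.linalg.norm(s) - 0.05 * np.sum(a ** 2)

# --- Optimize a linear policy purely on imagined rollouts -------------------
def imagined_return(theta):
    policy = theta.reshape(ACTION_DIM, STATE_DIM)
    s, total = rng.normal(size=STATE_DIM), 0.0
    for _ in range(HORIZON):
        a = np.tanh(policy @ s)
        total += reward(s, a)
        s = dream_step(s, a)
    return total

# Cross-entropy method: sample policies, keep elites, refit the distribution.
mean, std = np.zeros(ACTION_DIM * STATE_DIM), np.ones(ACTION_DIM * STATE_DIM)
for _ in range(30):
    thetas = rng.normal(mean, std, size=(64, mean.size))
    scores = np.array([imagined_return(t) for t in thetas])
    elites = thetas[np.argsort(scores)[-8:]]
    mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3

print("best imagined return:", imagined_return(mean))
```

Because every policy evaluation happens in the learned model rather than on the machine, the search can use many more rollouts than real-world interaction would allow, which is the motivation for the world-model approach.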