Abstract
In a complex farm environment, the lack of an intelligent obstacle avoidance function is a major barrier to the large-scale adoption of automatic navigation technology for agricultural vehicles. This paper introduces a high-performance obstacle avoidance control method based on reinforcement learning. The obstacle avoidance process is modeled to define the state and action spaces of a Double Deep Q-Network (Double DQN) architecture, and a reward function is designed to evaluate and guide model training. A neural network model is constructed and embedded in the Double DQN architecture to select the output action according to the input state. To train the model efficiently, three encounter scenarios (Confronted, Cross, and Overtaking) are established in a Multi-Joint dynamics with Contact (MuJoCo) simulation environment, in which validation tests verify the stability and performance of the proposed obstacle avoidance controller. In field experiments, the averages of the shortest distance, trajectory length, and time of obstacle avoidance are 2.37 m, 0.53 m, and 2.7 s, respectively, which indicate the practical feasibility of the proposed controller. The proposed Double DQN-based controller shows a significant advantage over the traditional Risk Index-based control method in terms of both space utilization and time efficiency, and its performance facilitates automatic navigation in complex farmland.
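The core idea of the Double DQN update mentioned above can be illustrated with a minimal sketch: the online network selects the next action while a separate target network evaluates it, which reduces the Q-value overestimation of standard DQN. The function and Q-values below are hypothetical illustrations, not the paper's implementation; the three actions stand in for an assumed discrete steering action space.

```python
import numpy as np

def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN bootstrap target.

    Action selection uses the online network's Q-values; action
    evaluation uses the target network's Q-values, decoupling the
    two roles to mitigate overestimation bias.
    """
    if done:
        return reward  # terminal transition: no bootstrapped future value
    best_action = int(np.argmax(next_q_online))       # select with online net
    return reward + gamma * next_q_target[best_action]  # evaluate with target net

# Hypothetical Q-values for three discrete avoidance actions
next_q_online = np.array([1.0, 2.5, 0.3])  # online net prefers action 1
next_q_target = np.array([0.9, 2.0, 0.5])  # target net evaluates action 1
y = double_dqn_target(reward=1.0, next_q_online=next_q_online,
                      next_q_target=next_q_target, gamma=0.9)
# y = 1.0 + 0.9 * 2.0 = 2.8
```

In the full training loop, this target is regressed against the online network's Q-value for the taken action, and the target network's weights are periodically synchronized from the online network.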