Abstract
When deep reinforcement learning algorithms are used for path planning of a multi-DOF fruit-picking manipulator in unstructured environments, the manipulator struggles to obtain high-value samples at the beginning of training, resulting in low learning and convergence efficiency. To reduce this inefficient exploration in unstructured environments, this paper proposes a deep reinforcement learning strategy guided by expert experience. Simulation experiments were used to study the ratio of expert experience to newly generated samples and the frequency of return visits to expert experience. The results show that a ratio of expert experience declining from 0.45 to 0.35 over training improves the learning efficiency of the model more than a constant ratio: compared with a fixed ratio of 0.35 the success rate increased by 1.26%, and compared with a fixed ratio of 0.45 it increased by 20.37%. The highest success rate was achieved with 15 return visits per 50 episodes, an improvement of 31.77%. The results show that the proposed method effectively improves model performance and learning efficiency at the beginning of training in unstructured environments, and the training strategy has implications for reinforcement learning in other domains.
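To make the sample-mixing mechanism concrete, the sketch below shows one plausible implementation: each minibatch is drawn partly from an expert-demonstration buffer and partly from the agent's own replay buffer, with the expert share annealed from 0.45 down to 0.35. The linear decay schedule and the names expert_buffer and agent_buffer are assumptions for illustration; the abstract states only the endpoints of the ratio.

import random

def expert_ratio(episode, total_episodes, start=0.45, end=0.35):
    """Anneal the expert share of each minibatch from `start` to `end`.

    A linear schedule is assumed here; the paper reports only that a
    ratio declining from 0.45 to 0.35 outperformed a constant ratio.
    """
    frac = min(episode / total_episodes, 1.0)
    return start + (end - start) * frac

def sample_minibatch(expert_buffer, agent_buffer, batch_size, ratio):
    """Mix expert demonstrations with newly generated transitions."""
    n_expert = min(int(round(batch_size * ratio)), len(expert_buffer))
    n_agent = min(batch_size - n_expert, len(agent_buffer))
    batch = random.sample(expert_buffer, n_expert) + \
            random.sample(agent_buffer, n_agent)
    random.shuffle(batch)
    return batch

With a batch size of 64, this draws roughly 29 expert samples per minibatch at the start of training and about 22 near the end.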
Highlights
Automatic fruit-picking systems based on a multi-DOF manipulator have become a major direction in fruit harvesting, as they increase efficiency and reduce production costs [1]
Chun proposed a deep reinforcement learning framework that combined the advantages of convolutional neural networks (CNN) and the deep deterministic policy gradient (DDPG) algorithm to exploit delivery task information and automated guided vehicle (AGV) travel times in the dynamic scheduling of AGVs [17]
In the early stages of fruit picking, the complexity and disorder of the target locations, together with the random initialization of the network parameters at the start of training, make the model inefficient and the network difficult to converge
Summary
Automatic fruit-picking systems based on a multi-DOF (degree of freedom) manipulator have become a major direction in fruit harvesting, as they increase efficiency and reduce production costs [1]. Zheng et al. [25] proposed a deep deterministic policy gradient algorithm based on a stepwise migration strategy, which introduced spatial constraints for stepwise training in an obstacle-free environment to speed up network convergence; the prior knowledge thus obtained was then used to guide the path-planning task of a multi-DOF manipulator in a complex unstructured environment. A deep reinforcement learning strategy combined with expert experience was proposed to improve the learning efficiency of the algorithm at the beginning of the training period and reduce the blind exploration of the multi-DOF manipulator [28].
Deep Reinforcement Learning Strategies with Expert Experience
In the early stages of fruit picking, the complexity and disorder of the target locations, together with the random initialization of the network parameters, make the model inefficient and the network difficult to converge during training.
Figure 3. Picking scene. On the left side is an apple tree model, in which red spheres indicate ripe apples and green spheres indicate unripe apples; on the right side is a multi-DOF picking manipulator fixed on top of a mobile platform.
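The periodic return visits to expert experience can also be sketched in a few lines. The schedule below is a minimal illustration, assuming the 15 return visits are grouped at the start of each 50-episode cycle; the text does not specify how the visits are distributed within a cycle, and the function names are hypothetical.

def is_return_visit(episode, visits_per_cycle=15, cycle_length=50):
    """True on episodes devoted to revisiting expert experience.

    15 visits per 50-episode cycle gave the highest reported success
    rate; grouping them at the start of each cycle is an assumption.
    """
    return (episode % cycle_length) < visits_per_cycle

def training_schedule(episodes=500):
    """Label each episode as an expert return visit or ordinary exploration."""
    return ["expert" if is_return_visit(ep) else "explore"
            for ep in range(episodes)]

if __name__ == "__main__":
    schedule = training_schedule(100)
    print(schedule[:20])  # first cycle opens with 15 expert visits
    print(schedule.count("expert"), "of", len(schedule),
          "episodes revisit expert experience")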