Abstract
Multi-step tasks in cluttered environments are extremely challenging for robotic manipulation. Such a task interweaves high-level reasoning, which estimates how far the current state has progressed toward the overall goal, with low-level reasoning, which determines which action will make further progress. We propose PRRM, a modular, closed-loop framework that learns multi-step manipulation tasks through self-supervised learning. The framework includes an object detection module that provides guidance for action selection. We introduce a vision-based Action Projection Network (AcProNet) that maps visual observations to the execution values of action candidates and is trained with deep Q-learning. We define a reward function in which the reward weights of different actions can be adjusted according to the task goals. We further introduce a policy that determines the final action from the action candidates based on the results of the detection module. We demonstrate the effectiveness of our framework in simulated trials of several multi-step tasks. Experimental results show that our framework can learn complex behaviors in a cluttered environment and achieves good performance.
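To make the two learning ingredients of the abstract concrete, the following minimal sketch illustrates a goal-dependent, per-action reward weighting and a standard one-step deep Q-learning target of the kind described; it is not the authors' implementation, and names such as `action_weights` and `target_net` are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code): goal-weighted reward and a
# one-step deep Q-learning (TD) target. All names are illustrative.
import torch


def weighted_reward(base_reward: float, action: str, action_weights: dict) -> float:
    """Scale the raw reward by a per-action weight chosen for the task goal."""
    return action_weights.get(action, 1.0) * base_reward


def q_learning_target(reward: torch.Tensor,
                      next_obs: torch.Tensor,
                      done: torch.Tensor,
                      target_net: torch.nn.Module,
                      gamma: float = 0.99) -> torch.Tensor:
    """Standard one-step TD target: r + gamma * max_a' Q_target(s', a')."""
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=1).values
    return reward + gamma * (1.0 - done) * next_q


# Example: hypothetical task weights favoring grasping over pushing.
weights = {"push": 0.5, "grasp": 1.0}
r = weighted_reward(base_reward=1.0, action="push", action_weights=weights)
```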