Abstract

Grasping an object is usually only an intermediate goal for a robotic manipulator. To finish the task, the robot needs to know where the object is in its hand and what action to execute. This paper presents a general statistical framework to address these problems. Given a novel object, the robot learns a statistical model of grasp state conditioned on sensor values. The robot also builds a statistical model of the requirements for a successful execution of the task in terms of uncertainty in the state of the grasp. Both of these models are constructed by offline experiments. The online process then grasps objects and chooses actions to maximize likelihood of success. This paper describes the framework in detail, and demonstrates its effectiveness experimentally in placing, dropping, and insertion tasks. To construct statistical models, the robot performed over 8,000 grasp trials, and over 1,000 trials each of placing, dropping, and insertion.

Highlights

  • Knowledge of the grasp state is often critical to any subsequent manipulation task

  • The statistical framework proposed in this paper is best suited to model the execution of tasks that require grasping an object prior to execution, i.e., post-grasp manipulation tasks

  • The framework separates two problems: first, estimating the state of the grasp with in-hand sensors, and second, modeling the accuracy requirements that the particular task imposes on state estimation. This separation yields the benefit that the same model of state estimation can be used for different tasks, and the same model of task requirements for different manipulators

Summary

Introduction

Knowledge of the grasp state is often critical to any subsequent manipulation task. Intuitively, harder tasks demand a more accurate estimation of the state of a grasp than simpler ones. The framework addresses two problems: first, estimating the state of the grasp with in-hand sensors, and second, modeling the accuracy requirements that the particular task imposes on that state estimate. This separation yields the benefit that the same model of state estimation can be used for different tasks, and the same model of task requirements for different manipulators. Using this framework, each sensor reading generates a probability distribution over the task action space, enabling the robot not only to find the optimal action, but also to understand just how likely that action is to succeed.
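The action-selection step described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it assumes a one-dimensional discretized grasp state (a hypothetical in-hand object offset), a Gaussian posterior standing in for the learned sensing model, and a Gaussian tolerance curve standing in for the learned task-requirement model. The action that maximizes expected success under the posterior is then found by enumeration.

```python
import numpy as np

# Discretized grasp states: hypothetical object offset in the hand, in mm.
states = np.linspace(-10.0, 10.0, 41)

def posterior(estimate_mm, sigma_mm):
    """Stand-in for the learned sensing model P(state | sensor reading):
    a Gaussian centered on the sensed estimate, normalized over the grid."""
    p = np.exp(-0.5 * ((states - estimate_mm) / sigma_mm) ** 2)
    return p / p.sum()

def task_success(state_mm, action_mm, tolerance_mm):
    """Stand-in for the learned task-requirement model
    P(success | true state, action): success falls off as the action's
    assumed state deviates from the true one; tighter tolerance = harder task."""
    return np.exp(-0.5 * ((state_mm - action_mm) / tolerance_mm) ** 2)

def best_action(estimate_mm, sigma_mm, tolerance_mm):
    """Pick the action maximizing expected success under the posterior,
    and report that expected success probability."""
    p = posterior(estimate_mm, sigma_mm)
    candidates = states  # one candidate action per discretized state
    expected = np.array(
        [(p * task_success(states, a, tolerance_mm)).sum() for a in candidates]
    )
    i = int(expected.argmax())
    return candidates[i], expected[i]

# Example: sensors suggest the object sits ~2 mm off-center with 1.5 mm
# uncertainty; the task (e.g., insertion) tolerates ~1 mm of error.
action, p_success = best_action(estimate_mm=2.0, sigma_mm=1.5, tolerance_mm=1.0)
```

Note that `p_success` is informative on its own: if it falls below a threshold, the robot could re-grasp or re-sense rather than attempt a task it is likely to fail, which is the kind of decision the paper's matching of task requirements to sensing capabilities supports.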

Related Work
Statistical Framework
Learning Sensing Capabilities
Prior Distribution
Posterior Distribution
Learning Task Requirements
Matching Task Requirements with Sensing Capabilities
Experimental Validation
Findings
Conclusion
Discussion
