Abstract

The safety of human workers is a primary concern in close human-robot collaboration. With the rapid development of artificial intelligence techniques, deep learning models operating on two-dimensional images have become feasible solutions for human motion detection. These models serve as “sensors” in the closed-loop system involving humans and robots. Most existing image-based human motion detection methods, however, do not account for the uncertainty of the deep learning model itself. The mappings established by deep learning models should not be trusted blindly; uncertainty should be a natural part of this type of sensor. In particular, model uncertainty should be explicitly quantified and incorporated into robot motion control to guarantee safety. With this motivation, this letter proposes a probabilistic interpretation method to rigorously quantify the uncertainty of these “sensors,” together with a framework that automatically benefits from a deep model's uncertainty. Experimental data from human-robot collaboration were collected and used to validate the proposed method. A training strategy is also proposed to efficiently train surrogate models that learn to refine the predictions of the main Bayesian models. The proposed framework is further evaluated on the EgoHands benchmark, showing a 4.7% increase in mIoU.
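The abstract does not specify the exact Bayesian inference scheme. A common way to obtain the kind of per-pixel uncertainty it describes is Monte Carlo dropout, in which dropout stays active at test time and the variance across stochastic forward passes serves as an uncertainty estimate. The sketch below is illustrative only, assuming PyTorch; the model, the number of samples, and the function name mc_dropout_predict are hypothetical and not the letter's actual implementation.

    import torch
    import torch.nn as nn

    def mc_dropout_predict(model: nn.Module, image: torch.Tensor, n_samples: int = 20):
        """Approximate Bayesian segmentation via Monte Carlo dropout (sketch).

        Runs n_samples stochastic forward passes with dropout active and
        returns the per-pixel mean class probability (the prediction) and the
        per-pixel variance (a simple uncertainty proxy).
        """
        model.eval()
        # Re-enable only the dropout layers; batch norm etc. stay in eval mode.
        for m in model.modules():
            if isinstance(m, (nn.Dropout, nn.Dropout2d)):
                m.train()

        probs = []
        with torch.no_grad():
            for _ in range(n_samples):
                logits = model(image)               # (B, C, H, W) segmentation logits
                probs.append(torch.softmax(logits, dim=1))
        probs = torch.stack(probs)                  # (n_samples, B, C, H, W)

        mean_prob = probs.mean(dim=0)               # predictive mean over samples
        uncertainty = probs.var(dim=0).sum(dim=1)   # per-pixel variance, (B, H, W)
        return mean_prob, uncertainty

Downstream, a robot controller could treat regions whose uncertainty exceeds a threshold as unreliable sensor readings and respond conservatively (e.g., reduce speed), which matches the abstract's call to incorporate model uncertainty into robot motion control.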
