Abstract

One of the fundamental questions in shared control is how to allocate control authority between the human and the robot effectively. Conventional arbitration policies often define a single scalar weight for all 6 DOFs to blend human input and robot assistance. However, a single scalar lets the assistance over-dominate the user's input along some dimensions while providing insufficient assistance along others. As a result, current shared control can support simple telemanipulation tasks such as pushing, pressing, and basic positional control, but it is limited in tasks that require more DOFs, such as rotational motion. To fill this gap, a dimension-specific arbitration policy is developed that customizes the control arbitration along each DOF. The policy evaluates whether the robotic assistance is too timid or too aggressive along each DOF and sets the arbitration magnitude according to the level of disagreement in control allocation and the user's willingness to accept assistance, which is estimated with a feedback psychology model. Compared with existing methods, the proposed method achieves higher similarity and a higher ratio of agreement between human and robot commands (i.e., lower over-dominance) while simultaneously improving task performance. This arbitration strategy is expected to increase the adoption of teleoperation for object manipulation.
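To make the idea of dimension-specific arbitration concrete, the sketch below blends human and robot commands with a separate weight per DOF rather than one scalar for all six. The abstract does not specify the exact arbitration law, so the disagreement metric, the exponential shaping gain `k`, and the `willingness` input are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def arbitrate(u_human, u_robot, willingness, k=1.0):
    """Blend human and robot commands per DOF (illustrative sketch only).

    u_human, u_robot : (6,) command vectors (3 translational, 3 rotational DOFs).
    willingness      : scalar or (6,) values in [0, 1], assumed to come from a
                       feedback-psychology-based estimate of the user's
                       willingness to accept assistance.
    k                : hypothetical gain shaping how disagreement reduces
                       robot authority.
    """
    u_human = np.asarray(u_human, dtype=float)
    u_robot = np.asarray(u_robot, dtype=float)

    # Per-DOF disagreement: normalized difference between the two commands.
    scale = np.maximum(np.abs(u_human) + np.abs(u_robot), 1e-9)
    disagreement = np.abs(u_human - u_robot) / scale   # each entry in [0, 1]

    # Dimension-specific arbitration weight: more assistance where the user is
    # willing and the commands agree, less where they strongly disagree.
    alpha = np.clip(willingness * np.exp(-k * disagreement), 0.0, 1.0)

    # Blend per DOF instead of applying one scalar across all six dimensions.
    return (1.0 - alpha) * u_human + alpha * u_robot


if __name__ == "__main__":
    u_h = [0.20, 0.00, 0.10, 0.00, 0.30, 0.00]   # human command
    u_r = [0.25, 0.05, -0.10, 0.00, 0.10, 0.00]  # robot assistance command
    print(arbitrate(u_h, u_r, willingness=0.8))
```

In this sketch, a DOF where the two commands point in opposite directions keeps most of the human's authority, while a DOF where they agree receives stronger assistance, which is the behavior the abstract attributes to dimension-specific arbitration.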
