Object handover is a fundamental task for collaborative robots, particularly service robots. In in-home assistance scenarios, individuals often face constraints due to their posture and declining physical function, placing high demands on robots for flexible real-time control and intuitive interaction. During robot-to-human handovers, receivers can make only perceptual judgements based on the appearance of the object and the consistent behaviour of the robot. This limits their overall perception of the interaction and may lead to unexpected, dangerous behaviour. Varied handover trajectories also pose challenges to predictive robot control and motion coordination. Many studies have shown that force guidance can provide adequate information to receivers; however, force modulation driven by intention judgements based on velocity, acceleration, or jerk may impede the intended motion and demand additional effort. Starting from a human-to-human handover study, this paper proposes an anisotropic variable force-guided robot-to-human handover control method to overcome the cognition-reality gap. The retraction motion is decoupled based on a fitted motion plane and a task-related linear trajectory, which serve as references for overshoot suppression and impedance force modulation. Experimental results and user surveys show that the anisotropic variable impedance force suppresses overshoot without impeding the intended motions, giving the receiver sufficient time for behavioural adjustment and helping them complete a safe and efficient handover in their preferred manner.
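To make the control idea concrete, below is a minimal sketch of an anisotropic variable impedance law defined about a linear reference trajectory, in the spirit of the method described above. It is not the paper's formulation: the function name, gain values, ramp schedule, and the parameterization of the task-related line are all illustrative assumptions. The sketch captures the stated anisotropy by applying stiff correction only perpendicular to the reference line (overshoot suppression) and light damping along it (leaving the intended retraction unimpeded).

```python
import numpy as np

def guidance_force(x, x_dot, p0, d_hat,
                   k_perp_max=300.0, c_perp=25.0, c_par=5.0,
                   ramp=0.05):
    """Toy anisotropic variable impedance force about a linear
    reference trajectory; all names and gain values are illustrative.

    x, x_dot : end-effector position and velocity, shape (3,)
    p0, d_hat: a point on the task-related line and its direction
    """
    d_hat = d_hat / np.linalg.norm(d_hat)  # ensure unit direction

    # Split the deviation and velocity into components along the
    # task line and perpendicular to it.
    e = x - p0
    e_par = np.dot(e, d_hat) * d_hat
    e_perp = e - e_par
    v_par = np.dot(x_dot, d_hat) * d_hat
    v_perp = x_dot - v_par

    # Variable stiffness: ramps up with off-line deviation, so small
    # wanderings feel soft while larger overshoot is firmly corrected.
    k_perp = k_perp_max * min(1.0, np.linalg.norm(e_perp) / ramp)

    # Anisotropy: stiffness and strong damping act only perpendicular
    # to the line (overshoot suppression); along the line only light
    # damping is applied, so the intended motion is not resisted.
    return -(k_perp * e_perp + c_perp * v_perp) - c_par * v_par
```

The asymmetry between the perpendicular and parallel gains is what makes the force anisotropic; the deviation-dependent `k_perp` is one plausible reading of "variable" impedance, though the abstract does not specify the paper's actual modulation law.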