This paper introduces a novel approach to robotic assistance in bottle opening using the dual-arm robot TIAGo++. The solution enhances accessibility by addressing the needs of individuals with injuries or disabilities who may require help with common manipulation tasks. The aim of this paper is to propose a method combining vision, manipulation, and learning techniques to effectively address the task of bottle opening. The process begins with the acquisition of the bottle and cap positions using an RGB-D camera and computer vision. The robot then grasps the bottle with one gripper and the cap with the other, planning safe trajectories for each arm. Next, the opening procedure is executed via a position and force control scheme that ensures both grippers follow the unscrewing path defined by the cap thread. Within the control loop, force sensor information is used to control motion along the vertical axis, while gripper rotation is governed by a Deep Reinforcement Learning (DRL) agent trained to determine the optimal angle increments for rotation. The results demonstrate the successful training of the learning agent, and the experiments confirm the effectiveness of the proposed method for bottle opening with the TIAGo++ robot, showcasing the practical viability of the approach.
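The combined force/DRL control loop described in the abstract can be sketched as below. This is a minimal illustrative assumption, not the paper's implementation: the policy function, gains, force reference, and thread pitch are all hypothetical stand-ins for the trained agent and tuned controller.

```python
import numpy as np

def drl_policy(observation):
    """Stand-in for the trained DRL agent: maps the current state
    (here, measured force and cap angle) to a rotation increment [rad].
    The real agent's inputs and outputs are not given in the abstract."""
    force, angle = observation
    return float(np.clip(0.1 - 0.02 * force, 0.01, 0.1))

def unscrew_step(z, angle, f_measured, f_ref=2.0, k_f=1e-3, pitch=0.002):
    """One iteration of the hypothetical unscrewing control loop."""
    # Force control on the vertical axis: adjust z to keep the
    # measured contact force near the reference f_ref.
    z += k_f * (f_ref - f_measured)
    # DRL-selected rotation increment for the cap gripper.
    d_theta = drl_policy((f_measured, angle))
    angle += d_theta
    # Couple vertical motion to rotation through the cap thread pitch,
    # so both grippers follow the unscrewing path of the thread.
    z += pitch * d_theta / (2 * np.pi)
    return z, angle
```

A caller would iterate `unscrew_step` until the cap is free, feeding in fresh force-sensor readings each cycle.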