A high-level perceptual model, such as that found in the human brain, is essential for guiding robotic control in perception-intensive interactive tasks. Given their inherent softness, soft robots may benefit from such a mechanism when interacting with their surroundings. Here, we propose an expected-actual perception-action loop and demonstrate it on a sensorized soft continuum robot. By sensing and matching the expected and actual shape (1.4% estimation error on average) at each perception loop, our robot system rapidly (detection within 0.4 s) and robustly detects contact and distinguishes the source of deformation, whether external and internal actions are applied separately or simultaneously. We also show that our soft arm can accurately perceive contact direction in both static and dynamic configurations (error below 10°), even in interactive environments without vision. The potential of our method is demonstrated in two experimental scenarios: learning to autonomously navigate by touching the walls, and teaching and repeating desired position and force configurations through interaction with human operators.
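To make the expected-actual matching idea concrete, the sketch below contrasts an expected backbone shape (from a toy constant-curvature actuation model) with an actual shape (as would be reconstructed from embedded sensors) and flags contact when their relative discrepancy exceeds a threshold. All names, the shape model, and the threshold value are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an expected-actual perception-action check for contact
# detection on a soft arm. The shape is represented as 2D points along the
# backbone; a mismatch between expected and actual shape is attributed to an
# external action (contact). Models and thresholds here are assumptions.
import numpy as np

CONTACT_THRESHOLD = 0.014  # assumed: flag contact above ~1.4% relative shape error


def expected_shape(bend_angle, n_points=20, length=1.0):
    """Expected backbone points for a constant-curvature bend (toy model)."""
    s = np.linspace(0.0, 1.0, n_points)
    if abs(bend_angle) < 1e-6:
        return np.column_stack([np.zeros(n_points), s * length])
    r = length / bend_angle
    return np.column_stack([r * (1.0 - np.cos(bend_angle * s)),
                            r * np.sin(bend_angle * s)])


def detect_contact(expected, actual):
    """Compare expected and actual shapes; return (contact_detected, residual)."""
    residual = actual - expected
    rel_error = np.linalg.norm(residual) / np.linalg.norm(expected)
    return rel_error > CONTACT_THRESHOLD, residual


# Usage: internal action only (shapes match) vs. an added external push.
exp = expected_shape(bend_angle=0.8)
print(detect_contact(exp, exp.copy())[0])        # False: no contact
pushed = exp + np.array([0.03, 0.0])             # simulated external deformation
print(detect_contact(exp, pushed)[0])            # True: contact detected
```

In this simplified view, the residual deformation that triggers detection could also be used to estimate the contact direction, echoing the directional perception reported in the abstract.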