Many animals exhibit agile mobility in obstructed environments because they can tune their bodies to negotiate and manipulate obstacles and apertures. Most mobile robots, by contrast, are rigid structures that avoid obstacles where possible. In this work, we introduce a new framework, the Haptic And Visual Environment Navigation (HAVEN) Architecture, which combines vision and proprioception to make a deformable mobile robot more agile in obstructed environments. The algorithms enable the robot to be autonomously (a) predictive, by analysing visual feedback from the environment and preparing its body accordingly; (b) reactive, by responding to proprioceptive feedback; and (c) active, by manipulating obstacles and gap sizes using its deformable body. The robot was tested while approaching apertures in obstructed environments, with aperture sizes ranging from wider than the robot's body to narrower than its narrowest deformable configuration. The experiments involved multiple obstacles with different physical properties. The results show higher navigation success rates and an average 32% reduction in navigation time when the robot actively manipulates obstacles using its shape-changing body.
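The three behaviours described above can be sketched as a minimal control loop. This is a hypothetical illustration only: the class, method names, body-width limits, and the simplistic obstacle-pushing model are all assumptions for exposition and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Aperture:
    width: float  # aperture width estimated from vision (m) — assumed units


class DeformableRobot:
    # Assumed body-width limits for illustration, not the paper's values.
    MIN_WIDTH = 0.20  # narrowest deformable configuration (m)
    MAX_WIDTH = 0.40  # widest body shape (m)

    def __init__(self) -> None:
        self.body_width = self.MAX_WIDTH

    def predictive(self, aperture: Aperture) -> None:
        # (a) Predictive: use the visual estimate to pre-shape the body
        # slightly narrower than the gap, clamped to feasible widths.
        self.body_width = max(self.MIN_WIDTH,
                              min(self.MAX_WIDTH, 0.9 * aperture.width))

    def reactive(self, contact_force: float, threshold: float = 5.0) -> bool:
        # (b) Reactive: on unexpected proprioceptive contact, contract
        # the body further. Returns True if a contraction occurred.
        if contact_force > threshold and self.body_width > self.MIN_WIDTH:
            self.body_width = max(self.MIN_WIDTH, self.body_width - 0.02)
            return True
        return False

    def active(self, aperture: Aperture) -> float:
        # (c) Active: if the gap is narrower than even the minimum body
        # width, expand the body against movable obstacles to widen it
        # (a deliberately simplistic push model for this sketch).
        if aperture.width < self.MIN_WIDTH:
            aperture.width = self.MIN_WIDTH
        return aperture.width
```

In this sketch the predictive step runs on approach, the reactive step runs on contact during traversal, and the active step is invoked only when the aperture cannot admit even the fully contracted body.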