Abstract
We propose GA-Nav, a novel group-wise attention mechanism for identifying safe and navigable regions in unstructured environments from RGB images. Our group-wise attention method extracts multi-scale features from each terrain type independently and classifies terrains by navigability level using coarse-grained semantic segmentation. Our novel loss can be embedded within any backbone network to explicitly focus on each group's features at a low spatial resolution. This design leads to efficient inference while maintaining accuracy comparable to existing SOTA methods. Our extensive evaluations show that GA-Nav achieves state-of-the-art performance on the RUGD and RELLIS-3D datasets. We interface GA-Nav with a deep reinforcement learning-based navigation algorithm and highlight its benefits for navigation in real-world unstructured terrains. We integrate our GA-Nav-based navigation algorithm with Clearpath Jackal and Husky robots, and observe improved navigation success rates and better trajectory selection.
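To illustrate the idea of group-wise attention over terrain navigability groups, here is a minimal numpy sketch. It is a hypothetical simplification, not the paper's implementation: it splits a feature map's channels into one slice per navigability group, computes a per-group spatial attention map from that slice's channel-mean response, and returns one coarse score map per group. The function name, the group count, and the choice of channel-mean pooling are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def group_wise_attention(features, num_groups):
    """Hypothetical sketch of group-wise attention: split channels into
    per-group slices, weight each slice by its own spatial attention map,
    and emit one coarse score map per navigability group."""
    c, h, w = features.shape
    assert c % num_groups == 0, "channels must divide evenly into groups"
    group_feats = features.reshape(num_groups, c // num_groups, h, w)
    # Per-group spatial attention from the channel-mean response,
    # normalized over all spatial locations.
    attn = softmax(group_feats.mean(axis=1).reshape(num_groups, -1), axis=-1)
    attn = attn.reshape(num_groups, 1, h, w)
    attended = group_feats * attn
    # One coarse-grained score map per group (channel mean of attended features),
    # computed at the feature map's low spatial resolution.
    return attended.mean(axis=1)  # shape: (num_groups, h, w)

# Example: an 8-channel feature map at 16x16 resolution, 4 navigability groups.
feats = np.random.rand(8, 16, 16).astype(np.float32)
scores = group_wise_attention(feats, num_groups=4)
print(scores.shape)  # (4, 16, 16)
```

Because the attention and classification operate on the low-resolution feature map rather than full-resolution pixels, the per-group computation stays cheap, which is the source of the efficiency claim above.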