Abstract

Assistive robot systems have been developed to help people, especially those with disabilities, accomplish daily manipulation tasks; scene understanding plays a crucial role in enabling such robots to interpret their surroundings and behave accordingly. Most current systems approach scene understanding without considering the functional dependencies between objects. However, interacting with certain objects is valuable only when their function-relevant counterparts are also taken into account. In this paper, we augment an assistive robotic arm system with an end-to-end semantic relationship reasoning model that incorporates functional relationships between pairs of objects for semantic scene understanding. To ensure good generalization to unseen objects and relationships, the model works in a category-agnostic manner. We evaluate our design and three baseline methods on a self-collected benchmark with two levels of difficulty. To further demonstrate its effectiveness, the model is integrated with a symbolic planner for a goal-oriented, multi-step manipulation task on a real-world assistive robotic arm platform.
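The abstract does not specify the network architecture, but the following minimal sketch illustrates the category-agnostic pairwise reasoning idea it describes: a relationship between two objects is scored from their visual features alone, never from category labels, so unseen object classes are handled the same way as seen ones. All class names, feature dimensions, and the relation vocabulary below are illustrative assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class PairwiseRelationHead(nn.Module):
    """Hypothetical category-agnostic relation classifier: scores
    functional relationships from the appearance features of two
    object crops, with no access to category labels."""

    def __init__(self, feat_dim: int = 512, num_relations: int = 5):
        super().__init__()
        # Concatenated subject/object features -> relation logits.
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_relations),
        )

    def forward(self, subj_feat: torch.Tensor,
                obj_feat: torch.Tensor) -> torch.Tensor:
        # subj_feat, obj_feat: (batch, feat_dim) visual embeddings of
        # the two candidate objects, e.g. from a shared CNN backbone.
        return self.mlp(torch.cat([subj_feat, obj_feat], dim=-1))

# Toy usage: score relations for a batch of 4 object pairs.
head = PairwiseRelationHead()
subj = torch.randn(4, 512)
obj = torch.randn(4, 512)
logits = head(subj, obj)      # shape (4, num_relations)
pred = logits.argmax(dim=-1)  # most likely relationship per pair
```

In a full pipeline of the kind the abstract outlines, per-pair predictions from a head like this would be collected into scene-level facts and handed to the symbolic planner; that integration is omitted from this sketch.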
