Abstract

Humans perceive and describe their surroundings with qualitative statements (e.g., “a hand is in contact with a bottle”) rather than quantitative values (e.g., the 6-D poses of Alice’s hand and a bottle). Qualitative spatial representation (QSR) is a framework that represents the spatial information of objects in a qualitative manner. Region connection calculus (RCC), qualitative trajectory calculus (QTC), and qualitative distance calculus (QDC) are popular QSR calculi. With recent developments in computer vision, it has become important to compute QSR calculi from visual inputs (e.g., RGB-D images); indeed, many QSR application domains, such as human activity recognition (HAR) in robotics, involve visual inputs. We propose a qualitative spatial representation network (QSRNet) that computes the three QSR calculi (i.e., RCC, QTC, and QDC) from RGB-D images. QSRNet makes two novel contributions. First, it models the dependencies among the three QSR calculi; we call these dependencies kinematics for QSR because they are analogous to kinematics in classical mechanics. Second, it applies 3-D point cloud instance segmentation to compute the QSR calculi. Experimental results show that QSRNet improves accuracy over other state-of-the-art techniques.
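To make the three calculi concrete, the following minimal Python sketch derives each relation from quantitative 3-D inputs: QDC by binning Euclidean distance, QTC (basic variant) by the sign of each object's change in distance to the other, and a toy RCC connectivity test on voxelized object regions. This is an illustration only, not the paper's QSRNet; the function names (qdc, qtc, rcc_dc_po), the distance thresholds, and the voxel-overlap test are assumptions made for the example.

import numpy as np

def qdc(p, q, near=0.1, far=1.0):
    """Qualitative distance calculus: bin Euclidean distance into labels.
    Thresholds (near/far, in meters) are illustrative assumptions."""
    d = np.linalg.norm(p - q)
    if d < near:
        return "touching"
    return "near" if d < far else "far"

def qtc(p_prev, p_curr, q_prev, q_curr, eps=1e-3):
    """Basic qualitative trajectory calculus (QTC-B): for each object,
    '-' = moving toward the other, '+' = moving away, '0' = stable."""
    def sign(ref, prev, curr):
        delta = np.linalg.norm(curr - ref) - np.linalg.norm(prev - ref)
        return "0" if abs(delta) < eps else ("+" if delta > 0 else "-")
    return (sign(q_curr, p_prev, p_curr), sign(p_curr, q_prev, q_curr))

def rcc_dc_po(region_a, region_b):
    """Toy RCC test on boolean occupancy grids: 'DC' (disconnected) if the
    regions share no voxel, otherwise a 'PO'-like (partial overlap) relation."""
    return "PO" if np.any(region_a & region_b) else "DC"

# Example: a hand approaching a stationary bottle between two frames.
hand_prev, hand_curr = np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])
bottle = np.array([1.0, 0.0, 0.0])
print(qdc(hand_curr, bottle))                     # -> "near"
print(qtc(hand_prev, hand_curr, bottle, bottle))  # -> ("-", "0")

In QSRNet itself these relations are predicted jointly from segmented point clouds rather than thresholded by hand; the sketch only shows what each calculus encodes.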
