Abstract

While the RGB-D classification task has been actively researched in recent years, most existing methods focus on RGB-D source-to-target transfer. Such methods cannot address the real-world scenario in which paired depth images are unavailable. This paper focuses on a more flexible task that recognizes RGB test images by transferring them into the depth domain. This setting retains high accuracy by exploiting the auxiliary depth information available during training, while removing the cost of pairing RGB cameras with depth sensors at test time. Existing methods face two challenges: how to utilize the additional depth features, and how to handle the domain shift caused by the different imaging mechanisms of conventional RGB cameras and depth sensors. As a step towards bridging this gap, we propose a novel method called adaptive Visual-Depth Fusion Transfer (aVDFT), which takes advantage of the depth information and handles the domain distribution mismatch simultaneously. Our key novelties are: (1) a global visual-depth metric construction algorithm that effectively aligns the RGB and depth data structures; (2) adaptive transformed component extraction for the target domain, conditioned on transfer that is invariant to location, scale, and depth measurement. To demonstrate the effectiveness of aVDFT, we conduct comprehensive experiments on six pairs of RGB-D datasets for object recognition, scene classification, and gender recognition, and achieve state-of-the-art performance.
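The abstract does not give implementation details of aVDFT. As a rough illustration of the underlying idea of extracting transfer components that align the RGB and depth feature distributions, the sketch below uses a classic transfer-component-analysis style projection; it is not the paper's aVDFT algorithm, and the feature matrices `X_rgb` and `X_depth` are hypothetical placeholders.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def transfer_components(X_rgb, X_depth, dim=30, mu=1.0, gamma=1.0):
    """Learn a shared subspace in which the RGB and depth feature
    distributions are closer (TCA-style objective), returning the
    projected training features and the learned projection."""
    ns, nt = len(X_rgb), len(X_depth)
    X = np.vstack([X_rgb, X_depth])
    n = ns + nt

    K = rbf_kernel(X, X, gamma)                      # (n, n) kernel matrix
    # MMD coefficient matrix L: pulls the two domain means together.
    e = np.vstack([np.full((ns, 1), 1.0 / ns),
                   np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T
    H = np.eye(n) - np.ones((n, n)) / n              # centering matrix

    # Maximize retained variance (KHK) while minimizing MMD (KLK).
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(A)
    idx = np.argsort(-vals.real)[:dim]
    W = vecs[:, idx].real                            # (n, dim) projection

    Z = K @ W                                        # transformed features
    return Z[:ns], Z[ns:], (X, W, gamma)

def project_new(X_new, model):
    """Map unseen RGB test features into the learned shared subspace."""
    X_train, W, gamma = model
    return rbf_kernel(X_new, X_train, gamma) @ W
```

In this sketch the depth data is only needed when learning the projection; at test time an RGB image is mapped into the shared subspace with `project_new` and classified there, mirroring the paper's setting of RGB-only test inputs.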
