Abstract

Object recognition methods based on multimodal color-plus-depth (RGB-D) data usually treat each modality separately during feature extraction, which neglects the implicit relations between the two views and carries noise from either view into the final representation. To address these limitations, we propose a novel canonical correlation analysis (CCA)-based multiview convolutional neural network (CNN) framework for RGB-D object representation. The RGB and depth streams process their corresponding images and are then connected by a CCA module, yielding a common correlated feature space. In addition, two schemes are explored for embedding CCA into deep CNNs in a supervised manner. The first treats CCA as a regularization (CCAR) term added to the loss function. However, solving the CCA optimization directly is neither computationally efficient nor compatible with mini-batch-based stochastic optimization. We therefore propose an approximation of CCAR that replaces the weights of the feature-concatenation layer with the learned CCA projection matrices at regular intervals. This scheme retains the benefits of full CCAR while remaining efficient, since its cost is amortized over many training iterations. Experiments on benchmark RGB-D object recognition datasets show that the proposed methods outperform most existing methods that use the same network architectures.
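To make the CCA step concrete, the following is a minimal NumPy sketch of classical linear CCA: given two feature views (standing in for the RGB and depth stream activations), it computes projection matrices that map both views into a common correlated space, as the CCA module described above would. All names, shapes, and the regularization constant are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cca_projections(X, Y, k, reg=1e-4):
    """Classical CCA via whitening + SVD of the cross-covariance.

    X: (n, dx) features from one view (e.g., RGB stream) -- hypothetical shapes
    Y: (n, dy) features from the other view (e.g., depth stream)
    k: number of canonical components to keep
    reg: small ridge term for numerical stability (illustrative value)
    Returns projection matrices A (dx, k) and B (dy, k).
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        # Inverse matrix square root via eigendecomposition (S is symmetric PD)
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    # Top-k canonical directions, mapped back from the whitened space
    return Wx @ U[:, :k], Wy @ Vt[:k].T

# Toy usage: two correlated views of synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
Y = X @ rng.normal(size=(8, 6)) + 0.1 * rng.normal(size=(200, 6))
A, B = cca_projections(X, Y, k=3)
Zx, Zy = X @ A, Y @ B
# Paired canonical variates of strongly related views are highly correlated
corr = [np.corrcoef(Zx[:, i], Zy[:, i])[0, 1] for i in range(3)]
```

In the approximation scheme sketched in the abstract, matrices like `A` and `B` (recomputed every few epochs from the current stream features) would overwrite the weights of the fusion layer, rather than backpropagating through the CCA objective at every step.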
