Abstract

RGB-D image-based scene recognition has achieved significant performance improvements with the development of deep learning methods. While convolutional neural networks (CNNs) can learn high-semantic-level features for object recognition, these methods still have limitations for RGB-D scene classification. One limitation is that learning better multi-modal features for RGB-D scene recognition remains an open problem. Another is that scene images are usually not object-centric and exhibit great spatial variability; thus, vanilla full-image CNN features may not be optimal for scene recognition. Considering these problems, in this paper we propose a compact and effective framework for RGB-D scene recognition. Specifically, we make the following contributions: 1) a novel RGB-D scene recognition framework is proposed to explicitly learn global modal-specific and local modal-consistent features simultaneously; different from existing approaches, local CNN features are considered for learning modal-consistent representations; 2) a Key Feature Selection (KFS) module is designed, which can adaptively select important local features from high-semantic-level CNN feature maps and is more efficient and effective than object-detection and dense-patch-sampling based methods; and 3) a triplet correlation loss and a spatial-attention similarity loss are proposed for training the KFS module; under the supervision of the proposed loss functions, the network can learn important local features of the two modalities without extra annotations. Finally, by concatenating the global and local features, the proposed framework achieves new state-of-the-art scene recognition performance on the SUN RGB-D dataset and the NYU Depth version 2 (NYUD v2) dataset.
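
To make the selection step concrete, the following is a minimal PyTorch sketch of the kind of top-k local feature selection the abstract attributes to the KFS module. The 1x1 scoring head, the value of k, the pooling choice, and all layer sizes are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of adaptive local feature selection, assuming PyTorch.
# The scoring head and k are hypothetical; the paper's exact KFS design may differ.
import torch
import torch.nn as nn

class KeyFeatureSelection(nn.Module):
    """Select the k most important local features from a high-level
    CNN feature map (B, C, H, W), without object detection or dense
    patch sampling."""
    def __init__(self, channels: int, k: int = 8):
        super().__init__()
        self.k = k
        # 1x1 conv producing one importance score per spatial location
        # (an assumed scoring head, trained end-to-end with the network).
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, fmap: torch.Tensor) -> torch.Tensor:
        b, c, h, w = fmap.shape
        scores = self.score(fmap).view(b, h * w)        # (B, H*W) scores
        topk = scores.topk(self.k, dim=1).indices       # (B, k) best locations
        locals_ = fmap.view(b, c, h * w)                # (B, C, H*W)
        idx = topk.unsqueeze(1).expand(-1, c, -1)       # (B, C, k)
        return locals_.gather(2, idx)                   # (B, C, k) local features

# Toy fusion in the spirit of the abstract: concatenate a pooled global
# feature with the selected local features of one modality.
rgb_map = torch.randn(2, 512, 7, 7)                 # toy RGB feature map
kfs = KeyFeatureSelection(512, k=8)
local_feats = kfs(rgb_map)                          # (2, 512, 8)
global_feat = rgb_map.mean(dim=(2, 3))              # (2, 512) global pooling
fused = torch.cat([global_feat, local_feats.flatten(1)], dim=1)
```

In the full framework described above, such a module would be applied to both the RGB and depth streams, with the triplet correlation and spatial-attention similarity losses supervising the selected local features across modalities.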

Highlights

  • With the advent of deep learning methods, especially convolutional neural networks (CNNs), image classification performance has improved dramatically on the large-scale object-centric image recognition dataset ImageNet [1].

  • Zhou et al. [6] released a large-scale scene image classification dataset named Places, and showed the effectiveness of pre-training CNN parameters on it compared to the ImageNet dataset.

  • To handle the aforementioned issues, in this work we propose an end-to-end multi-modal feature learning framework, which adaptively selects important local region features and fuses the local and global features together for RGB-D scene recognition.

Introduction

With the advent of deep learning methods, especially convolutional neural networks (CNNs), image classification performance has improved dramatically on the large-scale object-centric image recognition dataset ImageNet [1]. The works of [7], [8] and [9] were proposed to leverage local CNN features for scene classification. These methods first extracted features densely at different scales and locations, and encoded them with the Fisher vector (FV) [10]. Although these works can improve performance with powerful local features, they have two obvious disadvantages. One is that merely exploiting local features neglects the global layout of the scene.
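
For context, here is a rough sketch of the dense-sampling-plus-Fisher-vector pipeline that those prior methods follow, assuming a scikit-learn GMM and a simplified first-order FV (mean-deviation terms only). The patch size, descriptor dimensions, and diagonal-covariance GMM are assumptions for illustration, not the cited methods' exact settings.

```python
# Illustrative dense-patch + Fisher vector pipeline, assuming scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

def dense_patches(feat_map: np.ndarray, size: int = 3) -> np.ndarray:
    """Extract overlapping size x size patches densely from an (H, W, C)
    feature map and flatten each into a local descriptor."""
    h, w, _ = feat_map.shape
    out = [feat_map[i:i + size, j:j + size].reshape(-1)
           for i in range(h - size + 1)
           for j in range(w - size + 1)]
    return np.stack(out)                                 # (N, size*size*C)

def fisher_vector(descs: np.ndarray, gmm: GaussianMixture) -> np.ndarray:
    """First-order Fisher vector encoding (mean-deviation terms only),
    for a GMM with diagonal covariances."""
    q = gmm.predict_proba(descs)                         # (N, K) soft assignments
    diff = descs[:, None, :] - gmm.means_[None]          # (N, K, D)
    fv = (q[..., None] * diff / np.sqrt(gmm.covariances_[None])).mean(0)
    return fv.ravel()                                    # (K * D,)

descs = dense_patches(np.random.rand(14, 14, 32))        # toy local descriptors
gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(descs)
fv = fisher_vector(descs, gmm)
```

This kind of encoding captures local appearance statistics but, as noted above, discards the global layout of the scene, which motivates combining local and global features.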
