Abstract

Classification of indoor environments is a challenging problem. The availability of low-cost depth sensors has opened up a new research area of using depth information in addition to color image (RGB) data for scene understanding. Transfer learning of deep convolutional networks with pairs of RGB and depth (RGB-D) images must address how to integrate these two modalities. Single-channel depth images are often converted to three-channel HHA images, encoding horizontal disparity, height above ground, and the angle of each pixel's local surface normal, so that transfer learning can be applied using networks trained on the Places365 dataset. The high computational cost of HHA encoding can be a major disadvantage for real-time scene prediction, although this matters less during the training phase. We propose a new, computationally efficient encoding method that can be integrated with any convolutional neural network. We show that our encoding approach performs as well as or better than HHA encoding in a multimodal transfer learning setup for scene classification. Our encoding is implemented in a customized, pretrained VGG16 network. We address the class imbalance problem seen in the image dataset using a feature-level method based on the synthetic minority oversampling technique (SMOTE). With appropriate image augmentation and fine-tuning, our network achieves scene classification accuracy comparable to that of other state-of-the-art architectures.
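The convolution-based encoding itself is detailed in later sections; as a rough illustration of the idea, here is a minimal PyTorch sketch of a trainable convolution that maps a one-channel depth image to three channels so a pretrained three-channel backbone can consume it. The layer name, kernel size, and the ImageNet weights (standing in for Places365 weights) are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class ConvDepthEncoder(nn.Module):
    """Hypothetical convolution-based encoding (CBE) layer: depth (1 ch) -> 3 ch."""
    def __init__(self, kernel_size=3):
        super().__init__()
        # A single learnable convolution; far cheaper than computing HHA
        # geometry (disparity, height, surface normals) for every pixel.
        self.encode = nn.Conv2d(1, 3, kernel_size, padding=kernel_size // 2)

    def forward(self, depth):
        return self.encode(depth)

# Prepend the encoder to a standard VGG16 backbone (ImageNet weights here
# as a stand-in; the paper fine-tunes Places365-pretrained weights).
backbone = vgg16(weights="IMAGENET1K_V1")
model = nn.Sequential(ConvDepthEncoder(), backbone)
logits = model(torch.randn(1, 1, 224, 224))  # one depth image -> class logits
```

Because the encoding is just a convolution, it trains end to end with the rest of the network and adds negligible inference cost, which is the efficiency argument made in the abstract.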

Highlights

  • RGB-D convolutional neural network (CNN) with convolution-based encoding (CBE): an RGB-D CNN with an added CBE layer was used in this setup

  • Experiments were performed with and without data augmentation, using both HHA encoding and convolution-based encoding. When the RGB-D CNN was trained without data augmentation and with depth images converted by HHA encoding, the scene classification accuracy obtained was 54.7%

  • Synthetic minority oversampling technique (SMOTE) was applied to features extracted at the output of the first dense layer of the trained RGB-D CNN (see the sketch after this list)
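As a minimal sketch of that feature-level oversampling step, assume features have already been extracted at the first dense layer (random arrays stand in for them here); SMOTE from the imbalanced-learn package then balances the class distribution before the dense layers are retrained. The feature dimension (4096, typical of a VGG16 dense layer) and the class sizes are illustrative assumptions.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# Stand-in for activations taken at the first dense layer of the trained
# RGB-D CNN (4096-D, as in a VGG16 fc layer); two imbalanced toy classes.
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0.0, 1.0, (400, 4096)),
                      rng.normal(1.0, 1.0, (50, 4096))])
labels = np.array([0] * 400 + [1] * 50)

# SMOTE synthesizes minority samples by interpolating between
# minority-class nearest neighbors in feature space.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(features, labels)
print(np.bincount(labels), "->", np.bincount(y_bal))  # [400  50] -> [400 400]
# X_bal, y_bal would then be used to retrain the network's dense layers.
```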


Introduction

Classical scene categorization systems extract image features and feed them to a classifier such as a support vector machine (SVM) or a random forest. The success of these systems depends on choosing features relevant to the task. With the availability of large datasets containing millions of images, convolutional networks can instead learn highly discriminative, task-relevant features. The class imbalance problem seen in the SUN RGB-D image dataset is addressed by applying the SMOTE technique to features extracted after training a deep convolutional network and using these features to retrain an ablated network.
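As a concrete (and deliberately simplified) example of that classical pipeline, the sketch below pairs one common hand-crafted descriptor (HOG) with an SVM; the descriptor choice, its parameters, and the toy data are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def extract_features(images):
    """Hand-crafted HOG descriptor for each grayscale image."""
    return np.array([hog(img,
                         orientations=9,
                         pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

# Toy grayscale "scenes" and labels; a real system would load a dataset.
rng = np.random.default_rng(0)
train_imgs = rng.random((20, 64, 64))
train_labels = rng.integers(0, 3, size=20)

clf = SVC(kernel="rbf")
clf.fit(extract_features(train_imgs), train_labels)
predictions = clf.predict(extract_features(train_imgs[:2]))
```

The contrast drawn above is that every step before the classifier here is designed by hand, whereas a convolutional network learns its features directly from data.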

Scene Classification Using Features Extracted
Scene Classification Using Neural Networks
Scene Recognition Using RGB-D Images
Class Balancing
Benchmark Dataset
Architecture of the Proposed Method
VGG Convolutional Network
Data Augmentation Module
Depth Encoding Module
SMOTE Oversampling and Fine-Tuning of Dense Layers
Experimental Setup
Dataset for Training and Validation
Ablation Study on VGG16-PlacesNet Configurations for Transfer Learning
Implementation of the Depth Encoding Module
Experimental Results and Analysis
Data Augmentation
Convolution-Based Encoding
Experimental Results with Oversampling
Comparison with Existing Methods
Conclusions
