Abstract

RGB-D sensor-based scene classification has recently received attention because of its potential use in human–computer interaction applications. However, when feature representation schemes are adopted, the concatenated scene features extracted from an RGB and depth image pair are very high-dimensional, so classification suffers from the curse of dimensionality. Therefore, we present a new subspace selection scheme called local structure preserving discriminative projections (LSPDP). LSPDP jointly addresses two objectives: (1) achieving a balance between preserving the local patch structure and the global within-class structure and (2) maximizing between-class distances. Extensive experimentation on the NYU Depth V1 dataset with feature sets computed using three popular schemes (locality-constrained linear coding (LLC), spatial pyramid matching using sparse coding (ScSPM), and efficient match kernels (EMKs)) demonstrates the robustness and effectiveness of the new method for RGB-D sensor-based scene classification.
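The abstract does not spell out the LSPDP objective, so the following is only a minimal sketch of a projection of this general kind, assuming a generalized-eigenproblem formulation that trades off within-class scatter against a k-nearest-neighbor graph Laplacian while maximizing between-class scatter; the function name `discriminative_projection` and the parameters `alpha`, `k`, and `d` are illustrative and not taken from the paper.

```python
# Hedged sketch of a local-structure-preserving discriminative projection:
# (1) preserve local neighborhood structure via a k-NN graph Laplacian and
# (2) maximize between-class scatter, solved as a generalized eigenproblem.
# The actual LSPDP formulation may differ; this is an assumed generic form.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def discriminative_projection(X, y, d=50, k=5, alpha=0.5):
    """X: (n_samples, n_features) concatenated RGB-D features; y: class labels.
    Returns an (n_features, d) projection matrix."""
    n, m = X.shape
    mean_all = X.mean(axis=0)

    # Between-class and within-class scatter matrices (standard LDA terms).
    Sb = np.zeros((m, m))
    Sw = np.zeros((m, m))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * diff @ diff.T
        Sw += (Xc - mc).T @ (Xc - mc)

    # Local-structure term: unnormalized Laplacian of a k-NN affinity graph,
    # so that X.T @ L @ X penalizes projections that tear neighbors apart.
    D = cdist(X, X)
    W = np.zeros((n, n))
    idx = np.argsort(D, axis=1)[:, 1:k + 1]  # k nearest neighbors (skip self)
    for i in range(n):
        W[i, idx[i]] = 1.0
    W = np.maximum(W, W.T)                    # symmetrize the graph
    L = np.diag(W.sum(axis=1)) - W
    Slocal = X.T @ L @ X

    # Maximize between-class scatter against a blend of within-class scatter
    # and the local-structure penalty; keep the top-d generalized eigenvectors.
    B = alpha * Sw + (1 - alpha) * Slocal + 1e-6 * np.eye(m)  # regularized
    vals, vecs = eigh(Sb, B)
    return vecs[:, np.argsort(vals)[::-1][:d]]
```

Under these assumptions, `alpha` plays the balancing role described in objective (1): `alpha = 1` recovers an LDA-like criterion driven purely by global within-class scatter, while smaller values weight the local patch-structure penalty more heavily.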
