Abstract

When 3D sensors such as Light Detection and Ranging (LIDAR) are employed for targeting and recognition of human actions from ground and aerial platforms, the resulting point clouds of body shape often comprise low-resolution, disjoint, and irregular patches of points caused by self-occlusion and viewing-angle variation. Many existing 3D shape descriptors designed for shape query and retrieval cannot work effectively with these degenerate point clouds because they depend on dense, smooth full-body scans. In this paper, a new degeneracy-tolerant, multi-scale 3D shape descriptor based on the discrete orthogonal Tchebichef moment is proposed as an alternative for representing and characterizing single-view partial point clouds. To evaluate the effectiveness of our descriptor, named the Tchebichef moment shape descriptor (TMSD), in human shape retrieval, we built a multi-subject pose shape baseline to produce simulated LIDAR captures at different viewing angles and conducted nearest-neighbor search and point cloud reconstruction experiments. The query results show that TMSD performs significantly better than the Fourier descriptor and slightly better than the wavelet descriptor, while being more flexible to construct. In addition, we propose a voxelization scheme that achieves translation, scale, and resolution invariance; these properties may be less of a concern in traditional full-body shape analysis but are crucial requirements for meaningful partial point cloud retrieval.
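The abstract's core idea, projecting a sampled shape onto discrete orthogonal Tchebichef (Chebyshev) polynomials and reconstructing it from the resulting moments, can be illustrated in one dimension. The sketch below is an assumption-laden simplification: the paper works with 3D voxelized point clouds, whereas here a 1-D "depth profile" stands in for the data, and the orthonormal polynomial basis is obtained by QR orthonormalization of a Vandermonde matrix (which yields, up to sign, the orthonormal discrete Tchebichef polynomials on the grid {0, ..., N-1}) rather than by the recurrence formulas typically used in the moment literature. All function names are illustrative, not from the paper.

```python
import numpy as np

def tchebichef_basis(N, n_moments):
    """Orthonormal discrete polynomial basis on the grid {0, ..., N-1}.

    Columns of the returned (N, n_moments) matrix are, up to sign, the
    orthonormal discrete Tchebichef polynomials, computed here by QR
    (Gram-Schmidt) orthonormalization of the monomial columns 1, x, x^2, ...
    """
    V = np.vander(np.arange(N, dtype=float), n_moments, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q

def tchebichef_moments(signal, n_moments):
    """Project a 1-D signal onto the first n_moments basis polynomials."""
    Q = tchebichef_basis(len(signal), n_moments)
    return Q.T @ signal

def reconstruct(moments, N):
    """Invert the projection; exact only when n_moments == N."""
    Q = tchebichef_basis(N, len(moments))
    return Q @ moments

# Toy example: a smooth bump standing in for a coarse depth profile.
x = np.linspace(0.0, 1.0, 32)
profile = np.exp(-((x - 0.4) ** 2) / 0.02)
m = tchebichef_moments(profile, 8)   # compact 8-coefficient descriptor
approx = reconstruct(m, 32)          # low-order (multi-scale) reconstruction
```

Keeping only the first few moments gives the multi-scale behavior the abstract alludes to: low-order moments capture the coarse shape, and adding higher-order moments refines the reconstruction toward the original samples.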
