Abstract

Recent research and development in artificial intelligence for robotic systems, such as self-driving cars, point to an increasing need for integrated multi-modal human signature data. In this paper, we describe the development of a prototype multi-modal human Motion and Shape Analysis System (MSAS). MSAS stores and manages physically collected multi-modal human shape and motion data, as well as augmented-modality data such as simulated LIDAR (Light Detection and Ranging) partial point clouds created through data synthesis. We highlight cross-modality metadata integration and search, as well as online 3D content-based retrieval and visualization. The goal of MSAS is to harness the synergy among the various modalities by bringing them together under a standard, structured framework that supports data exploitation.
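The abstract mentions generating simulated LIDAR partial point clouds from collected 3D data. The paper does not specify the synthesis method; as an illustration only, the sketch below approximates a single-viewpoint LIDAR scan of a full point cloud using a coarse z-buffer in spherical coordinates, keeping the nearest point per angular bin as seen from a hypothetical sensor position. The function name `simulate_lidar_partial_scan` and all parameters are assumptions, not the MSAS pipeline.

```python
import numpy as np

def simulate_lidar_partial_scan(points, sensor_pos, az_bins=360, el_bins=90):
    """Approximate a single-viewpoint LIDAR scan of a dense point cloud.

    Keeps only the nearest point in each (azimuth, elevation) bin as seen
    from sensor_pos -- a coarse spherical z-buffer standing in for occlusion.
    Hypothetical helper; not the method described in the paper.
    """
    rel = points - sensor_pos                       # vectors from sensor to points
    r = np.linalg.norm(rel, axis=1)                 # range to each point
    az = np.arctan2(rel[:, 1], rel[:, 0])           # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(rel[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))

    # Quantize ray directions onto a fixed angular grid (the sensor's resolution).
    ai = np.clip(((az + np.pi) / (2 * np.pi) * az_bins).astype(int), 0, az_bins - 1)
    ei = np.clip(((el + np.pi / 2) / np.pi * el_bins).astype(int), 0, el_bins - 1)
    cell = ai * el_bins + ei

    # Within each occupied cell, keep the minimum-range point (the visible surface).
    order = np.argsort(r)                           # nearest points first
    _, first = np.unique(cell[order], return_index=True)
    return points[order[first]]

# Example: scan a random blob from one side; the far side is occluded away.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(20000, 3))
partial = simulate_lidar_partial_scan(cloud, sensor_pos=np.array([5.0, 0.0, 0.0]))
print(partial.shape)  # far fewer points than the input cloud
```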
