Abstract

Recent research and development in artificial intelligence for robotic systems, such as self-driving cars, points to an increasing need for integrated multimodal human signature data. In this paper, we introduce our development of a prototype multimodal human Motion and Shape Analysis System (MSAS). MSAS stores and manages physically collected multimodal human shape and motion data as well as augmented modality data, such as simulated LIDAR (Light Detection and Ranging) partial point clouds created through data synthesis. We highlight cross-modality metadata integration and search, as well as online 3D content-based retrieval and visualization. The goal of MSAS is to harness the synergy among the various modalities by bringing them together under a standard, structured framework that supports data exploitation.
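
The abstract does not detail how MSAS synthesizes its simulated-LIDAR partial point clouds. The sketch below shows one common approach under assumed details: sample the surface of a full 3D body scan, place a virtual sensor, and keep only the points visible from it via hidden point removal (Katz et al.), which reproduces the self-occlusion a real single-viewpoint LIDAR scan exhibits. The file names, point count, and sensor placement are illustrative assumptions, not details from MSAS; the example uses the Open3D library.

```python
# Minimal sketch: synthesize a LIDAR-style partial point cloud from a
# full body mesh. All file names and parameters below are hypothetical;
# this is not the MSAS pipeline, only one plausible realization of it.
import numpy as np
import open3d as o3d

# Load a full human body mesh (hypothetical file) and sample its surface.
mesh = o3d.io.read_triangle_mesh("body_scan.obj")
full_cloud = mesh.sample_points_uniformly(number_of_points=50_000)

# Place a virtual sensor one bounding-box diagonal away from the subject.
diameter = np.linalg.norm(full_cloud.get_max_bound() - full_cloud.get_min_bound())
sensor_position = [0.0, 0.0, diameter]

# Hidden point removal keeps only points visible from the sensor,
# yielding the occluded "partial" view a single LIDAR scan would capture.
_, visible_idx = full_cloud.hidden_point_removal(
    camera_location=sensor_position, radius=diameter * 100
)
partial_cloud = full_cloud.select_by_index(visible_idx)
o3d.io.write_point_cloud("body_scan_partial.ply", partial_cloud)
```

Sweeping the virtual sensor over several positions, and optionally adding range noise, would produce the family of occluded views that an augmented modality of this kind typically comprises.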
