Abstract
Large volumes of RGB video data are recorded and processed every day. One of the data modalities embedded within these videos is information about human motion. Until now, this information has been nearly infeasible to extract, so human-motion understanding research has been limited mainly to 3D skeleton data captured by dedicated hardware. However, recent advances in computer vision make it possible to estimate 2D skeleton sequences from ordinary videos quite accurately. Such 2D skeleton data hold excellent potential for future motion-understanding applications. In this paper, we adopt a state-of-the-art bidirectional LSTM network to analyze the gap in expressive power between 2D and 3D skeleton data recorded simultaneously for a large set of 20k human actions. We further examine how the missing depth information and fluctuations in 2D skeleton sizes influence the recognition rate. We also demonstrate the suitability of 2D skeleton data for general daily-activity recognition by reporting baselines on the PKU-MMD dataset.
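The kind of classifier described above can be sketched as a bidirectional LSTM over per-frame joint coordinates. This is an illustrative PyTorch sketch only: the joint count, hidden size, layer count, and class count below are assumptions for demonstration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BiLSTMActionClassifier(nn.Module):
    """Hypothetical bidirectional LSTM over skeleton sequences (sketch)."""

    def __init__(self, num_joints=25, coord_dim=2, hidden_size=128,
                 num_layers=2, num_classes=51):
        super().__init__()
        # Each frame is the flattened (x, y) -- or (x, y, z) -- coordinates
        # of all skeleton joints.
        self.lstm = nn.LSTM(
            input_size=num_joints * coord_dim,
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True,
            bidirectional=True,
        )
        # Concatenated forward/backward hidden states -> class logits.
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, frames, num_joints * coord_dim)
        out, _ = self.lstm(x)
        # Classify from the last time step's bidirectional representation.
        return self.fc(out[:, -1, :])

# Example: a batch of 4 clips, 60 frames each, 25 joints in 2D.
model = BiLSTMActionClassifier(num_joints=25, coord_dim=2, num_classes=51)
logits = model(torch.randn(4, 60, 25 * 2))
```

Switching `coord_dim` from 2 to 3 is the only change needed to feed the same network 3D skeletons, which makes this architecture convenient for the kind of 2D-versus-3D comparison the abstract describes.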