Abstract
Ergonomics is an important part of user experience in multimedia art design. This study aims to improve human–computer interaction efficiency using a data-driven neural network model that combines video data and wearable-device data to achieve high-precision human motion recognition. First, human skeleton information is extracted from video through the OpenPose framework, and the motion features of each joint are computed. Then, exercise intensity is calculated from the inertial data of a wearable bracelet. Finally, the two feature streams are fed jointly into a recurrent neural network (RNN) to achieve high-precision human motion recognition. The main contribution of the article is a multimodal fusion model for human activity recognition. The experimental results show that the recognition precision of the proposed method reaches 97.85%, substantially higher than the backpropagation neural network (BPNN) and K-nearest neighbor (KNN) baselines, which reach 94.35% and 90.12%, respectively. This superior performance indicates that the model can provide strong technical support for the interaction design of multimedia art.
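As a rough illustration of the fusion step described above, the sketch below shows one plausible way to combine per-frame skeleton features with wearable-IMU features in a single recurrent network. The abstract does not specify the RNN variant, feature dimensions, or fusion scheme, so the LSTM choice, layer sizes, class count, and feature-level (early) fusion used here are all illustrative assumptions, not details from the paper.

```python
# A minimal sketch (not the authors' code) of multimodal fusion for activity
# recognition: per-frame skeleton joint features (e.g., derived from OpenPose
# keypoints) are concatenated with wearable-IMU intensity features and passed
# through a recurrent network. All dimensions below are assumed for illustration.
import torch
import torch.nn as nn

class MultimodalActivityRNN(nn.Module):
    def __init__(self, skeleton_dim=50, imu_dim=6, hidden_dim=128, num_classes=10):
        super().__init__()
        # One LSTM over the concatenated per-frame feature vector (early fusion).
        self.rnn = nn.LSTM(skeleton_dim + imu_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, skeleton_seq, imu_seq):
        # skeleton_seq: (batch, time, skeleton_dim) joint-motion features
        # imu_seq:      (batch, time, imu_dim) inertial/intensity features
        fused = torch.cat([skeleton_seq, imu_seq], dim=-1)
        out, _ = self.rnn(fused)
        # Classify the activity from the last time step's hidden state.
        return self.classifier(out[:, -1, :])

# Example with random tensors standing in for real video/bracelet data:
model = MultimodalActivityRNN()
skel = torch.randn(4, 30, 50)   # 4 clips, 30 frames, 50 skeleton features
imu = torch.randn(4, 30, 6)     # matching 30-step IMU windows
logits = model(skel, imu)       # (4, 10) class scores
```

Concatenating the two modalities before the recurrent layer is only one design choice; a late-fusion variant with a separate encoder per modality would be an equally plausible reading of the abstract.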