Abstract

Vision-based human joint angle estimation is essential for remote and continuous health monitoring. Most vision-based angle estimation methods use the locations of human joints extracted with optical motion capture cameras, depth cameras, or human pose estimation models. This study proposes a reliable and straightforward deep learning approach for estimating knee and elbow flexion/extension angles from RGB video. Fifteen healthy participants performed four daily activities. Experiments were conducted with four different deep learning networks; each network took nine subsequent frames as input, while the output was the knee and elbow joint angles, extracted from an optical motion capture system, for each frame. The BiLSTM-based joint angle estimator can estimate both joint angles with a correlation of 0.955 for the knee and 0.917 for the elbow joint, regardless of the camera view angle.
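A minimal sketch of the kind of BiLSTM regressor the abstract describes, mapping nine subsequent frames to a knee and an elbow angle per frame. The per-frame feature dimension, hidden size, and class/parameter names are assumptions for illustration; the abstract does not specify the architecture details.

```python
import torch
import torch.nn as nn

class JointAngleBiLSTM(nn.Module):
    """Hypothetical BiLSTM: 9 frames of pose features -> 2 joint angles per frame.

    feat_dim and hidden are illustrative choices, not values from the paper.
    """
    def __init__(self, feat_dim: int = 34, hidden: int = 64, n_angles: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_angles)  # 2*hidden: forward + backward states

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 9, feat_dim) -- nine subsequent frames per clip
        out, _ = self.lstm(x)       # (batch, 9, 2 * hidden)
        return self.head(out)       # (batch, 9, 2): knee and elbow angle per frame

model = JointAngleBiLSTM()
clips = torch.randn(4, 9, 34)       # 4 clips of 9 frames each
angles = model(clips)               # shape: (4, 9, 2)
```

Training such a model would regress these outputs against motion-capture angles with a loss such as MSE; the abstract reports only the resulting correlations.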
