Abstract

This paper presents a comparison between a multiple red green blue-depth (RGB-D) vision system, an intensity variation-based polymer optical fiber (POF) sensor, and inertial measurement units (IMUs) for human joint angle estimation and movement analysis. This systematic comparison aims to study the trade-off between the non-invasiveness of a vision system and the accuracy of wearable technologies for joint angle measurements. The multiple RGB-D vision system is composed of two camera-based sensors, in which a sensor fusion algorithm is employed to mitigate the occlusion and out-of-range issues commonly reported in such systems. Two wearable sensors were employed for the comparison of angle estimation: (i) a POF curvature sensor to measure the 1-DOF angle; and (ii) a commercially available IMU system (MTw Awinda, Xsens). A protocol to evaluate the elbow joints of 11 healthy volunteers was implemented, and the three systems were compared using the correlation coefficient and the root mean squared error (RMSE). Moreover, a novel approach for angle correction of markerless camera-based systems is proposed here to minimize the errors in the sagittal plane. Results show a correlation coefficient of up to 0.99 between the sensors, with an RMSE of 4.90°, which represents a two-fold reduction compared with the uncompensated results (10.42°). Thus, the RGB-D system with the proposed technique is an attractive non-invasive and low-cost option for joint angle assessment. The authors envisage the proposed vision system as a valuable tool for the development of game-based interactive environments and for the assistance of healthcare professionals in the generation of functional parameters during motion analysis in physical training and therapy.
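The agreement metrics reported above (Pearson correlation coefficient and RMSE between two joint-angle time series) can be computed as in the following sketch. The signals here are synthetic, hypothetical elbow-flexion traces for illustration only; they do not reproduce the study's data.

```python
import numpy as np

def agreement_metrics(ref_deg, est_deg):
    """Pearson correlation and RMSE between two joint-angle series (degrees)."""
    ref = np.asarray(ref_deg, dtype=float)
    est = np.asarray(est_deg, dtype=float)
    r = np.corrcoef(ref, est)[0, 1]          # Pearson correlation coefficient
    rmse = np.sqrt(np.mean((ref - est) ** 2))  # root mean squared error
    return r, rmse

# Hypothetical traces: reference sensor vs. a noisy camera-based estimate
t = np.linspace(0, 2 * np.pi, 200)
ref = 45 + 45 * np.sin(t)                                   # flexion, 0-90 deg
est = ref + np.random.default_rng(0).normal(0, 5, t.size)   # ~5 deg noise

r, rmse = agreement_metrics(ref, est)
print(f"r = {r:.2f}, RMSE = {rmse:.2f} deg")
```

With this synthetic noise level, the correlation stays close to 1 while the RMSE tracks the injected noise amplitude, which is the same pair of statistics used in the paper to compare the three systems.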

Highlights

  • There is a clear and growing interest in developing technology-based tools that systematically analyze human movement

  • The proposed technique is based on anthropometric measurements of each subject: the errors of the camera-based system, relative to the polymer optical fiber (POF) curvature sensor and the inertial measurement unit (IMU) system, are corrected through a correlation between the measured errors and the arm lengths of each subject (d1, d2, and d3 of Equation (1))

  • The proposed markerless system uses two red green blue-depth (RGB-D) cameras to reduce errors and inaccuracies related to self-occlusion issues
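The elbow angle that the systems above estimate can be recovered from three 3-D keypoints (shoulder, elbow, wrist) reported by an RGB-D skeleton tracker, as the angle between the upper-arm and forearm vectors. The sketch below is a minimal, hedged illustration of that geometry; the coordinates are hypothetical and the function is not taken from the paper's implementation.

```python
import numpy as np

def elbow_angle_deg(shoulder, elbow, wrist):
    """Angle at the elbow between the upper-arm and forearm vectors (degrees)."""
    u = np.asarray(shoulder, dtype=float) - np.asarray(elbow, dtype=float)  # upper arm
    f = np.asarray(wrist, dtype=float) - np.asarray(elbow, dtype=float)     # forearm
    cosang = np.dot(u, f) / (np.linalg.norm(u) * np.linalg.norm(f))
    # Clip to guard against floating-point values slightly outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical keypoints (metres) for a roughly right-angle flexion
angle = elbow_angle_deg([0.0, 0.3, 0.0], [0.0, 0.0, 0.0], [0.25, 0.0, 0.0])
print(angle)  # prints 90.0
```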


Introduction

There is a clear and growing interest in developing technology-based tools that systematically analyze human movement. There are many advantages to implementing automated systems that detect human motion for applications associated with children in a healthcare context or for assessing the mobility impairment of ill and elderly people [1]. Automated quantification of body motion to support specialists in the decision-making process, covering aspects such as stability, duration, coordination, and posture control, is the desired result of these technology-based approaches [2,3]. Despite recent advances in this area, automated quantification of human movement for children with sensory processing and cognitive impairments, and for adults with mobility disabilities, presents multiple challenges due to factors such as accessibility barriers, the requirement of devices attached to the body, and the high cost of such systems. Camera-based markerless systems can be used in scenarios where the user cannot tolerate a wearable device to capture data [4].
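One way a dual-camera markerless system can mitigate the self-occlusion issues mentioned above is to fuse the per-camera angle estimates, weighting each view by its tracking confidence. The following is a minimal sketch of that idea under stated assumptions: the weighting scheme, confidence values, and function names are hypothetical and are not the fusion algorithm described in the paper.

```python
import math
import numpy as np

def fuse_angles(angles_deg, confidences):
    """Confidence-weighted fusion of per-camera joint-angle estimates.

    Views with zero confidence (e.g. an occluded joint) contribute nothing;
    if no view sees the joint, NaN is returned.
    """
    a = np.asarray(angles_deg, dtype=float)
    w = np.asarray(confidences, dtype=float)
    if w.sum() == 0:
        return float("nan")  # joint not visible in any view
    return float(np.dot(a, w) / w.sum())

# Camera 1 sees the elbow well; camera 2 is partially occluded
fused = fuse_angles([88.0, 95.0], [0.9, 0.2])
print(fused)  # ≈ 89.27
```

The design choice here is deliberately simple: a confidence-weighted mean degrades gracefully as one view loses the joint, rather than switching abruptly between cameras.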
