Abstract
In stroke rehabilitation systems and applications, reliability, accuracy, and occlusion must all be taken into consideration. Unfortunately, most existing approaches focus primarily on the first two issues, yet during the stroke rehabilitation process, occlusion leads to incorrect judgements even for medical staff. To tackle these three issues simultaneously, we propose a heterogeneous sensor fusion framework composed of an RGB-D camera and a wearable device that handles occlusion and provides robust joint locations for rehabilitation. To fuse multiple sensor measurements while compensating for occlusion, we apply heterogeneous sensor simultaneous localization, tracking, and modeling to estimate the locations of joints and sensors and to construct an upper extremity model for occlusion situations. Virtual measurements based on this model are used to estimate joint locations during occlusion, and a virtual relative orientation technique is applied to relax system limitations regarding orientation. Experimental results using the proposed approach with synthetic data and data collected from ten subjects show a 4.6 cm error on average overall and about a 15 cm error on average during occlusion. This constitutes a more robust approach for stroke patients that takes all three issues into account.
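The fusion idea described above can be illustrated with a minimal sketch: a Kalman-style update that fuses a 3-D joint measurement with the current estimate, and a model-based "virtual measurement" derived from a simple upper-extremity link model (parent joint plus bone vector) that substitutes for the camera when the joint is occluded. This is an illustrative simplification under assumed noise values, not the authors' actual framework; the function names and parameters are hypothetical.

```python
import numpy as np

def fuse(estimate, cov, measurement, meas_cov):
    # Standard Kalman-style update for a 3-D joint position estimate.
    K = cov @ np.linalg.inv(cov + meas_cov)        # Kalman gain
    new_est = estimate + K @ (measurement - estimate)
    new_cov = (np.eye(3) - K) @ cov
    return new_est, new_cov

def virtual_measurement(parent_joint, limb_direction, limb_length):
    # Model-based prediction of an occluded joint (e.g. the elbow)
    # from a link model: parent joint position + scaled bone vector.
    unit = limb_direction / np.linalg.norm(limb_direction)
    return parent_joint + limb_length * unit

# Example: elbow estimate while the camera sees the joint ...
est, cov = np.zeros(3), np.eye(3)
camera_meas = np.array([0.30, 0.0, 0.0])            # metres
est, cov = fuse(est, cov, camera_meas, 0.01 * np.eye(3))

# ... then the camera is occluded: substitute the virtual measurement,
# with a larger covariance to reflect its lower confidence.
vm = virtual_measurement(np.zeros(3), np.array([1.0, 0.0, 0.0]), 0.30)
est, cov = fuse(est, cov, vm, 0.05 * np.eye(3))
```

The larger covariance assigned to the virtual measurement mirrors the paper's finding that accuracy degrades (from roughly 4.6 cm to about 15 cm) during occlusion: the model-based prediction keeps tracking alive but is trusted less than a direct camera observation.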