Abstract
The objective of this research effort is to integrate therapy instruction with child-robot play interaction in order to better assess upper-arm rehabilitation. Using computer vision techniques such as Motion History Imaging (MHI), edge detection, and Random Sample Consensus (RANSAC), movements can be quantified through robot observation. In addition, by incorporating prior knowledge of exercise data, physical therapy metrics, and novel approaches, a mapping to therapist instructions can be created, enabling robotic feedback and intelligent interaction. The results are compared with ground truth data obtained from the Trimble 5606 Robotic Total Station and from visual experts in order to assess the effectiveness of this approach. We performed a series of upper-arm exercises with two male subjects, captured with a simple webcam. The exercises involved adduction/abduction and lateral/medial movements. The analysis shows that our algorithmic results compare closely with the ground truth data, with an average algorithmic error of less than 9% for the range of motion and less than 8% for the peak angular velocity of each subject.
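To make the named techniques concrete, the following is a minimal Python/OpenCV sketch of the kind of pipeline the abstract describes: a motion history image built from frame differencing, Canny edge detection on the recently moving region, and a basic RANSAC line fit to estimate an arm angle per frame, from which a range of motion and peak angular velocity can be derived. This is not the authors' implementation; the video file name, thresholds, and MHI duration are illustrative assumptions.

```python
import cv2
import numpy as np

MHI_DURATION = 0.5      # seconds a motion pixel stays "hot" in the MHI (assumed)
DIFF_THRESHOLD = 32     # frame-difference threshold for the motion silhouette (assumed)

def update_mhi(mhi, silhouette, timestamp):
    """Update a motion history image: moving pixels take the current timestamp,
    pixels older than MHI_DURATION decay to zero."""
    mhi[silhouette > 0] = timestamp
    mhi[mhi < timestamp - MHI_DURATION] = 0
    return mhi

def ransac_line_angle(points, iters=200, inlier_tol=3.0, rng=None):
    """Fit a 2D line to edge points with a basic RANSAC loop and return its
    angle in degrees (line-direction ambiguity ignored in this sketch)."""
    if len(points) < 2:
        return None
    rng = rng or np.random.default_rng(0)
    best_inliers, best_pair = 0, None
    for _ in range(iters):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.hypot(d[0], d[1])
        if norm < 1e-6:
            continue
        # Perpendicular distance of every point to the candidate line
        dist = np.abs(d[0] * (points[:, 1] - p1[1]) - d[1] * (points[:, 0] - p1[0])) / norm
        inliers = np.count_nonzero(dist < inlier_tol)
        if inliers > best_inliers:
            best_inliers, best_pair = inliers, (p1, p2)
    if best_pair is None:
        return None
    (x1, y1), (x2, y2) = best_pair
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

cap = cv2.VideoCapture("arm_exercise.avi")  # placeholder webcam recording
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
prev_gray, mhi, angles = None, None, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is None:
        prev_gray = gray
        mhi = np.zeros(gray.shape, np.float32)
        continue
    timestamp = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
    _, silhouette = cv2.threshold(cv2.absdiff(gray, prev_gray),
                                  DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    mhi = update_mhi(mhi, silhouette, timestamp)

    # Edge detection restricted to the region with recent motion
    recent = np.uint8(mhi > timestamp - MHI_DURATION) * 255
    edges = cv2.Canny(cv2.bitwise_and(gray, gray, mask=recent), 50, 150)
    ys, xs = np.nonzero(edges)
    angle = ransac_line_angle(np.column_stack([xs, ys]).astype(float))
    if angle is not None:
        angles.append(angle)
    prev_gray = gray

cap.release()
if len(angles) > 1:
    angles = np.unwrap(np.radians(angles))
    rom = np.degrees(angles.max() - angles.min())                    # range of motion
    peak_velocity = np.degrees(np.abs(np.diff(angles))).max() * fps  # deg/s
    print(f"Range of motion: {rom:.1f} deg, peak angular velocity: {peak_velocity:.1f} deg/s")
```

The per-frame angle series is the quantity that would be compared against the ground truth (total station and expert annotation) to obtain the percent errors reported in the abstract.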