Abstract

In this work, we propose a technique to automatically detect and segment hands in first-person images of patients performing upper limb rehabilitation exercises. The aim is to automate the assessment of the patient's recovery through rehabilitation exercises. The proposed technique comprises the following steps: 1) setting up a wearable camera system and collecting upper extremity rehabilitation exercise data. The data is filtered, selected, and annotated with left- and right-hand labels, and the image regions of the patient's hands are segmented. The resulting dataset, named RehabHand, consists of 3700 images and is used to train hand detection and segmentation models on first-person images. 2) surveying automatic hand detection and segmentation models based on the Mask R-CNN architecture with different backbones. Of the architectures evaluated, Mask R-CNN with a Res2Net backbone was selected for all three tasks: hand detection, left/right hand identification, and hand segmentation, as it achieved the highest performance in our experiments. To overcome the limited amount of training data, we use transfer learning together with data augmentation techniques to improve the accuracy of the model. On the test dataset, detection achieves AP = 92.3% for the left hand and AP = 91.1% for the right hand; segmentation achieves AP = 88.8% for the left hand and AP = 87% for the right hand. These results suggest that it is possible to automatically quantify the patient's ability to use their hands during upper extremity rehabilitation.
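As a rough sketch of the transfer-learning setup described above (not the authors' actual code), the following PyTorch/torchvision snippet fine-tunes a COCO-pretrained Mask R-CNN for three RehabHand classes: background, left hand, and right hand. Torchvision ships a ResNet-50-FPN backbone rather than the paper's Res2Net, so the backbone here is a stand-in, and all names are illustrative.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# 0 = background, 1 = left hand, 2 = right hand: treating the two hands as
# separate classes lets one model perform detection, left/right hand
# identification, and segmentation jointly.
NUM_CLASSES = 3

# Transfer learning: start from COCO-pretrained weights, then replace the
# box and mask heads so they predict our three classes.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels_mask, 256, NUM_CLASSES)

# Quick smoke test on a dummy first-person frame (3 x H x W tensor).
model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 480, 640)])
# preds[0] holds per-instance "boxes", "labels", "scores", and "masks".
```

In a full pipeline, this model would be fine-tuned on the annotated RehabHand images with the augmentations the abstract alludes to; note that a horizontal flip would have to swap the left- and right-hand labels, since mirroring exchanges the two classes.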
