Abstract

Most East Asian rehabilitation centers offer the chopsticks manipulation test (CMT). In addition to impaired hand function, approximately two-thirds of stroke survivors have visual impairment related to eye movement. This article investigates the significance of combining finger joint angle estimation with visual attention measurement in the CMT. We present a multiscopic framework consisting of microscopic, mesoscopic, and macroscopic levels. At the microscopic level, we develop a feature extraction technique that builds a kinematic model of the finger. At the mesoscopic level, we propose an active perception capability that detects the position and geometry of the finger on the chopsticks. The proposed framework estimates the proximal interphalangeal (PIP) joint angle of the index finger during the CMT using fully connected cascade neural networks (FCC-NN). At the macroscopic level, we implement a cognitive capability that measures visual attention during the CMT. We further evaluate the proposed framework against a conventional test that counts the number of peanuts (NP) moved from one bowl to another with chopsticks within a fixed time. For the evaluation, we introduce three parameters, namely joint angle estimation movement (JAEM), chopstick attention movement (CAM), and chopstick tip movement (CTM), computed by detecting the local minima and maxima of the corresponding signals. The experimental results suggest that the velocities of these three parameters could indicate improvement in hand and eye function during the CMT. We expect this study to benefit therapists and researchers by providing valuable information that is not otherwise accessible in the clinic. Code and datasets are available online at https://github.com/anom-tmu/cmt-attention.
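Since the FCC-NN is central to the joint angle estimation, the following minimal sketch illustrates one common way a fully connected cascade network can be structured, in which every hidden layer receives the original input together with the outputs of all earlier layers. This is a reconstruction under assumptions, not the authors' released implementation (see the repository above for the actual code); the input feature count, layer sizes, and activation function are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FCCNN(nn.Module):
    """Fully connected cascade network: each hidden layer receives the
    original input concatenated with the outputs of all earlier layers."""

    def __init__(self, n_inputs: int, hidden_sizes=(16, 16), n_outputs: int = 1):
        super().__init__()
        self.layers = nn.ModuleList()
        width = n_inputs
        for h in hidden_sizes:
            self.layers.append(nn.Linear(width, h))
            width += h  # cascade: this layer's output also feeds all later layers
        self.out = nn.Linear(width, n_outputs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = x
        for layer in self.layers:
            features = torch.cat([features, torch.tanh(layer(features))], dim=-1)
        return self.out(features)  # predicted PIP joint angle (degrees)

# Example: 8 assumed kinematic features per video frame -> one PIP angle.
model = FCCNN(n_inputs=8)
pip_angle = model(torch.randn(32, 8))  # batch of 32 frames -> shape (32, 1)
```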
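Similarly, the abstract states that JAEM, CAM, and CTM are obtained by detecting the local minima and maxima of a signal and then evaluating its velocity. The sketch below shows one plausible way to compute the speed of change between consecutive extrema of such a trace; the function name, the 30 fps sampling rate, and the synthetic joint-angle signal are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import find_peaks

def extrema_velocity(signal: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """Speed of change between consecutive local extrema of `signal`."""
    maxima, _ = find_peaks(signal)           # indices of local maxima
    minima, _ = find_peaks(-signal)          # indices of local minima
    extrema = np.sort(np.concatenate([maxima, minima]))
    dt = np.diff(extrema) / fps              # time between extrema (seconds)
    dy = np.abs(np.diff(signal[extrema]))    # amplitude change between extrema
    return dy / dt

# Example with a synthetic PIP joint-angle trace sampled at 30 fps:
t = np.linspace(0.0, 10.0, 300)                      # 10 s recording
angle = 45.0 + 20.0 * np.sin(2.0 * np.pi * 0.8 * t)  # open/close cycles
print(extrema_velocity(angle).mean())                # mean JAEM-style velocity
```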
