In active prostheses, the goal is to achieve target poses for a given family of tasks, for example, forward reaching with a transhumeral prosthesis using coordinated joint movements. To do so, the target poses must be distinguished accurately from the input features (e.g., kinematic and surface electromyography (sEMG) signals) obtained from the human user. However, these input features have conventionally been selected through human observation and heavily influenced by sensor availability, which may not always yield the most relevant information for differentiating the target poses in a given task. To better select, from a pool of available input features, those most appropriate for a given set of target poses, a measure that correlates well with the resulting classification accuracy is required to inform the interface design process. In this paper, a scatter-matrix based class separability measure is adopted to quantitatively evaluate the separability of the target poses given their corresponding input features. A human experiment was performed with ten able-bodied subjects, who performed forward-reaching arm movements toward nine target poses in a virtual reality (VR) platform while the kinematics of their arm movements and their muscle activities were recorded. The accuracy of the prosthetic interface in determining the user's intended target pose during forward reaching was evaluated for different combinations of input features selected from the kinematic and sEMG sensors worn by the subjects. The results demonstrate that input features yielding a high separability measure between target poses also yield high accuracy in identifying the intended target pose during task execution.
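The abstract does not specify the exact form of the scatter-matrix criterion. A common choice is the trace-based measure J = tr(S_W⁻¹ S_B), where S_W is the within-class scatter and S_B the between-class scatter; larger J indicates better-separated classes. The sketch below assumes that criterion; the function name, the pseudo-inverse guard, and the variable layout are illustrative choices, not the paper's implementation.

```python
import numpy as np

def scatter_separability(X, y):
    """Scatter-matrix class separability J = trace(pinv(S_W) @ S_B).

    X : (n_samples, n_features) input features (e.g., kinematic + sEMG)
    y : (n_samples,) target-pose labels
    Higher J suggests target poses that are easier to distinguish.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n_features = X.shape[1]
    S_W = np.zeros((n_features, n_features))  # within-class scatter
    S_B = np.zeros((n_features, n_features))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        # Accumulate scatter of samples around their class mean.
        S_W += (Xc - mean_c).T @ (Xc - mean_c)
        # Accumulate weighted scatter of class means around the global mean.
        d = (mean_c - mean_all).reshape(-1, 1)
        S_B += len(Xc) * (d @ d.T)
    # Pseudo-inverse guards against a singular within-class scatter matrix.
    return np.trace(np.linalg.pinv(S_W) @ S_B)
```

Under this reading, candidate feature subsets (e.g., kinematic-only, sEMG-only, or combined) would each be scored with J on the recorded data, and the subset with the highest separability would be preferred for the interface.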