Abstract
A-mode ultrasound, like other biological signals, exhibits deviations when the same gesture is performed at different arm positions. This problem hinders the clinical application of gesture recognition based on A-mode ultrasound. To address it, we propose the linearly enhanced training (LET) procedure, which compensates for the deviation of gesture signals after forearm position changes. The training set does not contain gesture data from the new position, so no additional training is required. Instead, we determine scale parameters that construct enhanced features for the new positions from the original-position gesture features. We tested the method on 10 gestures after the forearm angle was changed. Results show that classification accuracy improves by 7.8% and 9.4% after the forearm is bent and stretched 40°, respectively. Because the LET procedure is a step between feature extraction and model construction, it is compatible with various features and algorithms, offering a multi-scene solution based on wearable A-mode ultrasound.
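The abstract describes LET only at a high level; the sketch below illustrates one plausible reading of the idea, in which original-position feature vectors are multiplied by per-feature scale parameters to synthesize enhanced features for a new forearm position and augment the training set. The function name `let_augment` and the example scale vectors are hypothetical, not from the paper.

```python
import numpy as np

def let_augment(X_orig, scale_vectors):
    """Sketch of a linearly enhanced training set (assumed form of LET).

    X_orig        : (n_samples, n_features) features from the original position.
    scale_vectors : list of (n_features,) scale parameters, one per new position.
    Returns the original features stacked with each linearly scaled copy,
    so the classifier sees simulated new-position data without new recordings.
    """
    enhanced = [X_orig]
    for s in scale_vectors:
        # Element-wise linear scaling of each feature channel.
        enhanced.append(X_orig * s)
    return np.vstack(enhanced)

# Example: 4 samples with 3 features, two hypothetical scale vectors
# (e.g., one for a bent and one for a stretched forearm).
X = np.ones((4, 3))
scales = [np.array([0.9, 1.1, 1.0]), np.array([1.2, 0.8, 1.05])]
X_aug = let_augment(X, scales)
print(X_aug.shape)  # (12, 3)
```

Because the augmentation operates on extracted features rather than raw signals, it slots between feature extraction and model construction, which is why it is agnostic to the choice of features and classifier.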