Abstract

Purpose: Contaminants in surface electromyography (sEMG) recordings can impair the extraction of information if they are not kept at low levels. Several approaches have been proposed to minimize the effects of specific contaminants; however, knowing the type of interference is important to improve system efficiency and to avoid distorting the EMG signal through unnecessary filtering. This paper therefore proposes a new strategy to recognize and minimize four contaminants commonly found in sEMG recordings: motion artifact, electrocardiographic (ECG) interference, powerline interference, and additive white Gaussian noise.

Methods: An Actor-Critic Reinforcement Learning model with a Fuzzy Inference System (FIS)-based reward function (FIS-ACRL) was designed for contaminant identification and removal. The ACRL model consists of an environment (the sEMG signal), a state (a set of six handcrafted features), a set of actions (four filters/methodologies, one to remove each contaminant), and an actor and a critic (two neural networks). A reward is assigned to the agent's actions through a FIS: its inputs are determined by the impact of each action on the features, and the defuzzified output is a score that is, in turn, converted into the corresponding reward.

Results: The ACRL model was evaluated in a supervised experiment (the reward was assigned from the correct label), achieving an overall median accuracy of 93.13% at classifying the four contaminants with Signal-to-Noise Ratios (SNR) ranging from −30 to 10 dB in steps of 10 dB. The FIS-ACRL performance was assessed in an unsupervised experiment on the same dataset, where it obtained a median accuracy of 92.60%, outperforming three typical clustering algorithms (k-Means, Self-Organizing Map (SOM)-k-Means, and SOM-Ward).

Conclusion: The results validate the proposed strategy, showing that it is possible to identify the contaminant type through unsupervised and continuous learning and to automatically execute the correct procedure to minimize it. Moreover, the nature of actor-critic reinforcement learning enables continuous adaptation of the agent's learning as the environment changes.
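To make the Methods description concrete, below is a minimal sketch of the FIS-ACRL interaction loop in Python. Everything specific in it is an assumption for illustration, not the authors' implementation: the sampling rate FS, the six stand-in features, the four simplified filter designs, the toy two-rule fuzzy scorer, and the linear actor and critic networks are all hypothetical placeholders for the components the abstract names.

```python
# Hedged sketch of the FIS-ACRL loop. Hypothetical throughout: FS, the six
# stand-in features, the simplified filters, the two-rule fuzzy scorer, and
# the linear actor/critic (the paper uses two neural networks).
import numpy as np
from scipy import signal, stats

FS = 2000  # assumed sEMG sampling rate (Hz); not given in the abstract

def extract_features(x):
    """State: six handcrafted features of an sEMG window (stand-ins)."""
    f, pxx = signal.welch(x, fs=FS, nperseg=min(256, len(x)))
    feats = np.array([
        np.mean(np.abs(x)),                 # mean absolute value
        np.sqrt(np.mean(x ** 2)),           # root mean square
        np.mean(np.abs(np.diff(x))),        # waveform length per sample
        np.mean(np.diff(np.sign(x)) != 0),  # zero-crossing rate
        stats.kurtosis(x),                  # kurtosis
        f[np.argmax(pxx)],                  # dominant frequency (Hz)
    ])
    return feats / (np.linalg.norm(feats) + 1e-12)  # scale for linear nets

def highpass(x, fc):
    b, a = signal.butter(4, fc / (FS / 2), "high")
    return signal.filtfilt(b, a, x)

ACTIONS = [  # one removal procedure per contaminant (simplified designs)
    lambda x: highpass(x, 20.0),                                     # motion artifact
    lambda x: highpass(x, 30.0),                                     # ECG
    lambda x: signal.filtfilt(*signal.iirnotch(60.0, 30.0, FS), x),  # powerline
    lambda x: np.convolve(x, np.ones(5) / 5, "same"),                # AWGN smoothing
]

def tri(v, a, b, c):
    """Triangular membership function."""
    return float(np.clip(min((v - a) / (b - a), (c - v) / (c - b)), 0.0, 1.0))

def fis_reward(feat_before, feat_after):
    """Toy two-rule FIS: fuzzify the mean feature change caused by the action,
    then defuzzify (weighted average of +1/-1 consequents) to a score."""
    d = float(np.mean(feat_before - feat_after))
    improved = tri(d, 0.0, 0.5, 1.0)   # "features moved toward clean sEMG"
    degraded = tri(-d, 0.0, 0.5, 1.0)  # "filtering distorted the signal"
    return (improved - degraded) / (improved + degraded + 1e-12)

rng = np.random.default_rng(0)
W_actor = rng.normal(0.0, 0.1, (4, 6))  # actor (linear, for brevity)
w_critic = np.zeros(6)                  # critic (linear, for brevity)
ALPHA_A, ALPHA_C = 0.01, 0.05

def step(x):
    """One interaction: observe state, pick a filter, score it via the FIS,
    and make a one-step actor-critic update (no bootstrapping, for brevity)."""
    s = extract_features(x)
    logits = W_actor @ s
    p = np.exp(logits - logits.max()); p /= p.sum()
    a = rng.choice(4, p=p)
    y = ACTIONS[a](x)
    r = fis_reward(s, extract_features(y))
    td = r - w_critic @ s                    # TD error (advantage estimate)
    w_critic[:] += ALPHA_C * td * s          # critic update
    grad = -p[:, None] * s[None, :]          # d log pi(a|s) / d W_actor
    grad[a] += s
    W_actor[:] += ALPHA_A * td * grad        # actor update
    return a, r, y

# Usage: a noisy sEMG surrogate (white noise + 60 Hz powerline interference)
t = np.arange(2 * FS) / FS
x = rng.normal(0.0, 1.0, t.size) + 2.0 * np.sin(2 * np.pi * 60.0 * t)
action, reward, cleaned = step(x)
print(f"action={action}, reward={reward:+.3f}")
```

The point of the sketch is the structure of the loop, not the numbers: the FIS scores an action only through the change it induces in the feature vector, which is what lets the agent learn without labels, as in the unsupervised experiment reported in the Results.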
