This work presents a learning-based approach to respiratory surface electromyography (sEMG) quality evaluation. For this purpose, we define the signal-to-disturbance ratio (SDR), which quantifies the relative impact of disturbances on the affected signal components. The SDR values for different disturbance types are estimated from signal and disturbance characteristics and serve as a measure of signal quality. For this multivariate regression task, a fully connected neural network with three layers is trained on standard and handcrafted signal features. The features are extracted before and after removing cardiac artifacts. This integration of domain-specific knowledge allows us to use shallow neural networks, which contributes to the interpretability of the method. For training and testing, artificially disturbed signals are generated from undisturbed clinical sEMG recordings of mechanically ventilated patients and different disturbance models, covering single and combined disturbances with different SDRs. The results show that the root-mean-square error (RMSE) between the estimated and the applied SDR is smaller for powerline (2.3 dB) and spike-like disturbances (2.1 dB) than for high-frequency disturbances (3.6 dB) and motion artifacts (5.8 dB). Our findings also indicate that the SDR is estimated similarly well regardless of whether one or more contaminants are present simultaneously. Overall, the SDR can be determined with an RMSE of 3.8 dB and a correlation coefficient of 0.97. This work contributes to automatically quantifying the impact of disturbances and objectively assessing the quality of sEMG signals of the respiratory muscles.
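To make the described setup concrete, the sketch below illustrates one plausible arrangement: an SDR computed as a power ratio in dB (an SNR-style definition; the paper's exact formulation of the "relative impact on affected signal components" may differ) and a shallow three-layer fully connected regressor that maps a feature vector to one SDR estimate per disturbance type. The function and class names, the hidden-layer width, the feature dimension, and the use of PyTorch are illustrative assumptions rather than the authors' implementation; only the four disturbance types are taken from the abstract.

```python
import numpy as np
import torch
import torch.nn as nn


def sdr_db(signal: np.ndarray, disturbance: np.ndarray) -> float:
    """Signal-to-disturbance ratio in dB, assuming a power-ratio
    definition analogous to the classical SNR (hypothetical; the
    paper's exact definition may differ)."""
    p_signal = np.mean(signal ** 2)
    p_disturbance = np.mean(disturbance ** 2)
    return 10.0 * np.log10(p_signal / p_disturbance)


class SDRRegressor(nn.Module):
    """Shallow fully connected network with three layers mapping a
    feature vector to one SDR estimate per disturbance type
    (powerline, spike-like, high-frequency, motion artifact)."""

    def __init__(self, n_features: int, n_disturbance_types: int = 4, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            # Multivariate regression output: one SDR value per disturbance type.
            nn.Linear(hidden, n_disturbance_types),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Example usage with a random feature vector of assumed dimension 20.
model = SDRRegressor(n_features=20)
features = torch.randn(1, 20)
estimated_sdrs = model(features)  # shape: (1, 4), in dB
```

Such a shallow architecture keeps the mapping from handcrafted features to SDR estimates comparatively transparent, which is in line with the interpretability argument made in the abstract.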