Abstract
Motion artifact removal is a critical issue in functional near‐infrared spectroscopy (fNIRS) analysis tasks, with traditional methods relying heavily on expert knowledge and on the optimal selection of model parameters within brain regions. In this paper, we propose a deep learning denoising model based on a long short‐term memory (LSTM) autoencoder (viz., LSTM‐AE) to reduce motion artifacts. By training a neural network to reconstruct the hemodynamic response coupled with neuronal activity, LSTM‐AE achieves favorable denoising results on both our synthesized noisy simulated dataset and a real dataset. The LSTM‐AE processes the raw fNIRS signal in three phases: (1) the encoder module extracts morphological features from the raw fNIRS signal; (2) the LSTM module captures temporal correlations between individual samples to enhance the features; (3) the decoder module recovers and reconstructs the morphological feature information of the fNIRS signal from the latent space. Finally, a clean, reconstructed fNIRS signal is generated at the output layer. We compare our proposed method with existing calibration algorithms for hemodynamic response estimation using the following metrics: mean square error (MSE), Pearson's correlation (R²), signal‐to‐noise ratio (SNR), and percent deviation ratio (PDR). The proposed LSTM‐AE method outperforms conventional methods, demonstrating an improvement in all of these metrics. Additionally, the proposed LSTM‐AE method shows statistically significant differences in effectiveness from other motion artifact removal algorithms (p < 0.01, significance level α = 0.05). This study demonstrates the potential of deep network architectures to remove motion artifacts from fNIRS data.
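To make the three-phase encoder–LSTM–decoder pipeline described above concrete, the following is a minimal PyTorch sketch of an LSTM-autoencoder denoiser for a single-channel fNIRS time series. The layer sizes, channel counts, kernel sizes, and the MSE training objective are illustrative assumptions, not the authors' exact architecture or hyperparameters, which the abstract does not specify.

```python
# Minimal LSTM-autoencoder sketch for denoising a 1-D fNIRS time series.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn


class LSTMAutoencoder(nn.Module):
    def __init__(self, hidden_size: int = 64, latent_size: int = 32):
        super().__init__()
        # (1) Encoder: extracts morphological features from the raw signal.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, latent_size, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # (2) LSTM: models temporal correlations in the feature sequence.
        self.lstm = nn.LSTM(input_size=latent_size, hidden_size=hidden_size,
                            batch_first=True)
        # (3) Decoder: reconstructs the clean signal from the latent features.
        self.decoder = nn.Sequential(
            nn.Conv1d(hidden_size, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=5, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, time) noisy fNIRS segment
        feats = self.encoder(x)                       # (batch, latent, time)
        feats, _ = self.lstm(feats.transpose(1, 2))   # (batch, time, hidden)
        return self.decoder(feats.transpose(1, 2))    # (batch, 1, time)


if __name__ == "__main__":
    model = LSTMAutoencoder()
    noisy = torch.randn(8, 1, 512)   # placeholder noisy segments
    clean = torch.randn(8, 1, 512)   # placeholder noise-free targets
    loss = nn.MSELoss()(model(noisy), clean)
    loss.backward()                  # single training step; optimizer omitted
    print(loss.item())
```

In a training setup of the kind the abstract describes, the noisy inputs would come from the synthesized noisy simulated dataset and the targets from the corresponding artifact-free hemodynamic responses, with reconstruction quality then assessed via MSE, R², SNR, and PDR.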