Abstract

Thanks to wearable devices combined with AI algorithms, physiological parameters such as heart rate variability (HRV) can be recorded and analysed in ambulatory environments. The main downside of such setups is the poor quality of the recorded data due to movement, noise, and data loss. These errors may considerably alter HRV analysis and should therefore be addressed beforehand, especially if the data are used for medical diagnosis. One widely used way to handle such problems is interpolation, but this approach does not preserve the time dependence of the signal. In this study, we propose a new method for HRV processing that includes filtering and iterative data imputation using a Gaussian distribution. A distinguishing feature of the method is that it takes many physiological aspects into consideration, such as the HRV distribution, RR variability, and normal boundaries, as well as time series characteristics. We study the effect of this method on classification using a random forest (RF) classifier and compare it to other data imputation methods, including linear, shape-preserving piecewise cubic Hermite (pchip), and spline interpolation, in a case study on stress. Features from the HRV signals of 67 healthy subjects, reconstructed with all four methods, were analysed and separately classified by a random forest algorithm to detect stress against relaxation. The proposed method reached a stable F1 score of 61% even with a high percentage of missing data, whereas the other interpolation methods reached approximately 54% F1 score for a low percentage of missing data, dropping to about 44% as the percentage increased. This suggests that our method gives better results for stress classification, especially on signals with a high percentage of missing data.
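The three baseline reconstruction techniques mentioned above can be sketched as follows. This is a minimal illustration, assuming toy RR values and gap positions of our own choosing, not data from the study; the variable names are illustrative.

```python
# Sketch: filling gaps in an RR-interval series with the three baseline
# interpolation methods compared in the study (linear, pchip, spline).
# The toy data and gap positions below are assumptions for illustration.
import numpy as np
from scipy.interpolate import interp1d, PchipInterpolator, CubicSpline

rr = np.array([0.80, 0.82, np.nan, np.nan, 0.85, 0.83, np.nan, 0.81])  # seconds
t = np.arange(len(rr))        # beat index as a stand-in for time
valid = ~np.isnan(rr)

linear = interp1d(t[valid], rr[valid])(t)
pchip = PchipInterpolator(t[valid], rr[valid])(t)
spline = CubicSpline(t[valid], rr[valid])(t)

# All three fill the gaps, but none enforces physiological plausibility
# (RR boundaries, beat-to-beat variability, HRV distribution), which is
# the gap the proposed model-based imputation aims to close.
```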

Highlights

  • Heart rate variability quantifies the fluctuations in the time intervals between successive heart beats (RR intervals)

  • Because heart rate variability (HRV) features derived from poor-quality signals cannot be trusted for reliable classification, especially if used for medical purposes, HRV signals should be carefully edited beforehand for data imputation and exclusion of miscalculated RR intervals (RRIs), as emphasised by many studies [5–7]

  • We propose a new approach for HRV processing and we measure its impact on stress classification, as classification is the ultimate goal

Introduction

Heart rate variability quantifies the fluctuations in the time intervals between successive heart beats (RR intervals). The analysis of HRV can provide insights into autonomic nervous function and information about the sympathetic–parasympathetic balance and cardiovascular health [1]. Thanks to machine learning algorithms and wearable biosensors, HRV is widely used today as an indicator of different physiological states and pathologies such as mental stress [2,3]. HRV data collection is relatively easy, noninvasive, and inexpensive, which makes it valuable and very popular for ambulatory health monitoring [4]. HRV can be extracted from either ECG or PPG sensors, both widely available today. Whereas HRV analysis requires an accurate RR interval (RRI) time series containing only pure sinus beats, wearable-type ECG and PPG devices are prone to artifacts and substantial data loss, which cause gaps and abnormal RR intervals. Because HRV features derived from poor-quality signals cannot be trusted for reliable classification, especially if used for medical purposes, HRV signals should be carefully edited beforehand for data imputation and exclusion of miscalculated RRIs, as emphasised by many studies [5–7].
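To make the definition concrete, the fluctuations described above are typically summarised by time-domain features computed directly on the RRI series. The sketch below computes two standard ones, SDNN and RMSSD, after excluding beats outside an assumed physiological range; the 0.3–2.0 s bounds and the toy data are our own illustrative assumptions, not the exact editing criteria of this study.

```python
# Sketch: standard time-domain HRV features from an RR-interval series,
# after dropping beats outside an assumed physiological range.
import numpy as np

def hrv_features(rr_s, lo=0.3, hi=2.0):
    """Return (SDNN, RMSSD) in milliseconds from RR intervals in seconds.

    lo/hi are hedged, assumed physiological bounds used to exclude
    miscalculated RRIs before computing the features.
    """
    rr = np.asarray(rr_s, dtype=float)
    rr = rr[(rr >= lo) & (rr <= hi)]      # exclude non-physiological beats
    rr_ms = rr * 1000.0
    sdnn = rr_ms.std(ddof=1)              # overall variability of RRIs
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # beat-to-beat variability
    return sdnn, rmssd

# The 3.5 s interval is a simulated artifact and is excluded by the bounds.
sdnn, rmssd = hrv_features([0.80, 0.82, 0.79, 3.5, 0.81, 0.83])
```

Features such as these are what the downstream classifier consumes, which is why artifact handling upstream matters for classification quality.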
