Abstract

The accurate detection of physiologically relevant events in photoplethysmographic (PPG) and phonocardiographic (PCG) signals recorded by wearable sensors is essential for estimating key cardiovascular parameters such as heart rate and blood pressure. However, measurements performed in uncontrolled conditions without clinical supervision leave detection quality particularly susceptible to noise and motion artifacts. This work proposes a new fully automatic computational framework, based on convolutional networks, to identify and localize fiducial points in time, namely the foot, maximum slope and peak of the PPG signal and the S1 sound of the PCG signal, both acquired by a custom chest sensor recently described in the literature by our group. The event detection problem was reframed as a single hybrid regression-classification problem, addressed by a custom neural architecture that processes the PPG and PCG signals sequentially. Tests were performed analysing four different acquisition conditions (rest, cycling, rest recovery and walking). Cross-validation results for the three PPG fiducial points showed identification accuracy greater than 93 % and localization error (RMSE) lower than 10 ms. As expected, cycling and walking provided worse results than rest and recovery, yet still reached an accuracy greater than 90 % and a localization error lower than 15 ms. Likewise, the identification accuracy and localization error for the S1 sound were greater than 90 % and lower than 25 ms, respectively. Overall, this study showcased the ability of the proposed technique to detect events with high accuracy not only in steady acquisitions but also during subject movement. We also showed that the proposed network outperformed the traditional Shannon-energy-envelope method in the detection of the S1 sound, reaching detection performance comparable to state-of-the-art algorithms. Therefore, we argue that coupling chest sensors with deep-learning processing techniques may enable wearable devices to unobtrusively acquire health information while being less affected by noise and motion artifacts.
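The abstract does not include an implementation, so the following PyTorch sketch is only one plausible illustration of a hybrid regression-classification detector for 1-D physiological signals: a shared convolutional backbone feeds a classification head (is a fiducial event present in this window?) and a regression head (where in the window does it occur?). All layer sizes, names, the windowing scheme, and the loss weighting are assumptions, not the authors' architecture.

# Illustrative sketch only; hyperparameters and structure are assumed,
# not taken from the paper.
import torch
import torch.nn as nn

class HybridEventDetector(nn.Module):
    """1-D CNN with a shared backbone and two heads: a classifier deciding
    whether a fiducial event (e.g. PPG foot/slope/peak or PCG S1) occurs in
    the input window, and a regressor estimating its temporal offset."""

    def __init__(self, in_channels: int = 1, n_event_types: int = 1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1),  # -> (batch, 32, 1)
            nn.Flatten(),             # -> (batch, 32)
        )
        # Classification head: logit that each event type is present.
        self.cls_head = nn.Linear(32, n_event_types)
        # Regression head: normalized event position within the window.
        self.reg_head = nn.Linear(32, n_event_types)

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)
        presence_logits = self.cls_head(feats)        # for BCEWithLogitsLoss
        offset = torch.sigmoid(self.reg_head(feats))  # in [0, 1]; rescale to ms outside
        return presence_logits, offset

def hybrid_loss(logits, offset, target_presence, target_offset, alpha=1.0):
    """Combined objective typical of hybrid setups: classification on every
    window, regression only on windows that actually contain the event."""
    cls_loss = nn.functional.binary_cross_entropy_with_logits(logits, target_presence)
    mask = target_presence > 0.5
    if mask.any():
        reg_loss = nn.functional.mse_loss(offset[mask], target_offset[mask])
    else:
        reg_loss = offset.sum() * 0.0  # keeps the graph valid on empty windows
    return cls_loss + alpha * reg_loss

In such a setup, the regression output would be rescaled from [0, 1] to milliseconds using the window length, and the localization RMSE reported above would then be computed against annotated event times on windows the classifier flags as positive.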
