Abstract

Background: The remote measurement of physiological signals from video has gained particular attention in recent years. Estimating cardiovascular parameters such as oxygen saturation and arterial blood pressure (BP) has been covered by relatively few studies and remains very challenging. Recent attempts demonstrated that BP can be estimated from facial video, but only under highly controlled scenarios or with moderate performance. The data used in these works have not been publicly released or were gathered in a clinical setting. Methods: In contrast, we propose a framework for estimating BP from publicly available data in order to allow replication and facilitate fair comparison. We developed and trained a deep U-shaped neural network to recover the blood pressure waveform from its imaging photoplethysmographic (iPPG) signal counterpart. The model predicts the continuous wavelet transform (CWT) representation of a BP signal from the CWT of an iPPG signal. The inverse CWT is then computed to recover the BP time series. Results: The proposed framework was evaluated on 57 participants using the international standards developed by the AAMI and the BHS. Results exhibit close agreement with ground-truth BP values. The method satisfies all standards in the estimation of mean and diastolic BP (grade A) and nearly all standards in the estimation of systolic BP (grade B). Conclusions: To the best of our knowledge, this is the first demonstration of a deep learning-based framework that predicts the continuous blood pressure waveform from facial video analysis. The code developed during the study is publicly available (https://github.com/frederic-bousefsaf/ippg2bp).
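The network described in the abstract operates on CWT scalograms rather than raw time series: the iPPG signal is transformed to the time-scale domain, the U-shaped network maps it to a BP scalogram, and the inverse CWT recovers the waveform. The following is a minimal NumPy sketch of the forward CWT step only, assuming a complex Morlet mother wavelet and a toy 1 Hz (about 60 bpm) test signal sampled at 30 fps; the abstract does not specify the mother wavelet or sampling parameters, so these are illustrative choices.

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet, 1/sqrt(scale)-normalized.
    The choice of mother wavelet is an assumption, not taken from the paper."""
    x = t / scale
    return np.exp(1j * w0 * x - 0.5 * x**2) / np.sqrt(scale)

def cwt(signal, scales, fs):
    """Continuous wavelet transform by direct convolution.
    Returns an array of shape (n_scales, n_samples): the scalogram."""
    n = len(signal)
    t = (np.arange(n) - n / 2) / fs          # time axis centered on the window
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi = np.conj(morlet(t, s))[::-1]    # reversed conjugate -> correlation
        out[i] = np.convolve(signal, psi, mode="same") / fs
    return out

# Toy iPPG-like signal: a 1 Hz oscillation (~60 bpm) sampled at 30 fps
fs = 30.0
x = np.cos(2 * np.pi * 1.0 * np.arange(512) / fs)
scales = np.linspace(0.3, 2.0, 50)           # scales in seconds
C = cwt(x, scales, fs)                       # scalogram fed to the network
```

In such a pipeline the magnitude (or real/imaginary parts) of `C` would form the input image of the U-shaped network; for a 1 Hz Morlet response, energy concentrates near the scale w0/(2*pi*f), i.e. roughly 0.95 s here.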
