Abstract

Newborn patients in the neonatal intensive care unit (NICU) require continuous monitoring of vital signs. Non-contact patient monitoring is preferred in this setting due to the fragile condition of neonatal patients. Depth-based approaches for estimating the respiratory rate (RR) can operate effectively in conditions where an RGB-based method would typically fail, such as low lighting or when a patient is covered with blankets. Many previously developed depth-based RR estimation techniques require careful camera placement with known geometry relative to the patient, or manual definition of a region of interest (ROI). Here we present a framework for depth-based RR estimation where the camera position is arbitrary and the ROI is determined automatically and directly from the depth data. Camera placement is addressed through perspective transformation of the scene, which is accomplished by selecting a small number of registration points known to lie in the same plane. The chest ROI is determined automatically by examining the morphology of progressive depth slices in the corrected depth data. We demonstrate the effectiveness of this RR estimation pipeline using actual neonatal patient depth data collected from an RGB-D sensor. RR estimation accuracy is measured relative to the gold-standard RR captured from the bedside patient monitor. Perspective transformation is shown to be critical to the effective performance of the automated ROI segmentation algorithm. Furthermore, the automated ROI segmentation algorithm is shown to improve both time- and frequency-domain RR estimation accuracy. When combined, these pre-processing stages substantially improve the depth-based RR estimation pipeline, with the percentage of acceptable estimates (mean absolute error less than 5 breaths per minute) increasing from 3.60% to 13.47% in the frequency domain and from 6.12% to 8.97% in the time domain. Further development will focus on RR estimation from the perspective-corrected depth data and segmented ROI.
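
To make the two pre-processing stages and the frequency-domain estimator concrete, the following is a minimal sketch, not the authors' implementation: it fits a plane to a few user-selected coplanar registration points to correct camera perspective, then estimates RR as the dominant spectral peak of the mean ROI depth signal. The function names, the 30 Hz frame rate, and the respiratory band limits are illustrative assumptions only.

```python
"""Hypothetical sketch of perspective correction and frequency-domain RR estimation."""
import numpy as np


def plane_rotation(points):
    """Rotation aligning the best-fit plane normal of `points` (N x 3) with the camera z-axis."""
    centered = points - points.mean(axis=0)
    # Smallest right singular vector of the centered points is the least-squares plane normal.
    normal = np.linalg.svd(centered)[2][-1]
    if normal[2] < 0:                       # orient the normal toward the camera
        normal = -normal
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    c = float(np.dot(normal, z))
    if np.linalg.norm(v) < 1e-9:            # plane already parallel to the image plane
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues' formula for the rotation taking `normal` onto the z-axis.
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))


def correct_perspective(point_cloud, registration_points):
    """Rotate an (N x 3) point cloud so the registration plane is parallel to the image plane."""
    R = plane_rotation(registration_points)
    return point_cloud @ R.T


def estimate_rr_fft(mean_roi_depth, fs=30.0, band=(0.3, 1.5)):
    """Estimate RR (breaths/min) from the dominant in-band spectral peak of the mean ROI depth."""
    x = mean_roi_depth - np.mean(mean_roi_depth)          # remove the DC component
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(x))
    in_band = (freqs >= band[0]) & (freqs <= band[1])     # restrict to a plausible respiratory band
    peak_freq = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak_freq


if __name__ == "__main__":
    # Synthetic check: recover a 45 breaths/min sinusoid (~3 mm chest excursion) from 20 s of "depth".
    fs, rr_true = 30.0, 45.0
    t = np.arange(0, 20, 1.0 / fs)
    signal = 0.003 * np.sin(2 * np.pi * (rr_true / 60.0) * t)
    print(f"Estimated RR: {estimate_rr_fft(signal, fs):.1f} breaths/min")
```

A time-domain estimator, by contrast, would count breath-to-breath intervals (e.g., via peak detection on the same mean ROI depth signal) rather than locating a spectral peak; the acceptable-estimate percentages quoted above compare both approaches.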
