Abstract

Photoplethysmography imaging (PPGI) for non-contact monitoring of preterm infants in the neonatal intensive care unit (NICU) is a promising technology, as it could reduce medical adhesive-related skin injuries and associated complications. For practical implementations of PPGI, a region of interest has to be detected automatically in real time. As neonates’ body proportions differ significantly from those of adults, existing approaches may not be applicable in a straightforward way, and color-based skin detection requires RGB data, prohibiting the use of less intrusive near-infrared (NIR) acquisition. In this paper, we present a deep learning-based method for segmentation of neonatal video data. We augmented an existing encoder-decoder semantic segmentation method with a modified version of the ResNet-50 encoder. This reduced the computational time by a factor of 7.5, so that 30 frames per second can be processed at 960 × 576 pixels. The method was developed and optimized on publicly available databases with segmentation data from adults. For evaluation, a comprehensive dataset of RGB and NIR video recordings from 29 neonates with various skin tones, recorded in two NICUs in Germany and India, was used. From all recordings, 643 frames were manually segmented. After pre-training the model on the public adult data, parts of the neonatal data were used for additional training, and left-out neonates were used for cross-validated evaluation. On the RGB data, the head is segmented well (82% intersection over union, 88% accuracy), and performance is comparable with that achieved on large, public, non-neonatal datasets. Performance on the NIR data, however, was inferior. By employing data augmentation to generate additional virtual NIR data for training, results were improved, and the head could be segmented with 62% intersection over union and 65% accuracy. The method is in principle capable of performing segmentation in real time and may thus provide a useful tool for future PPGI applications.

Graphical Abstract

This work presents the development of a customized, real-time capable deep learning architecture for segmenting neonatal videos recorded in the intensive care unit. In addition to hand-annotated data, transfer learning is exploited to improve performance.
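The sketch below illustrates the kind of encoder-decoder architecture the abstract describes. It is not the authors' exact network: it assumes PyTorch and torchvision are available, pairs a truncated ResNet-50 encoder with a deliberately simple 1 × 1 convolution decoder, and includes a hypothetical rgb_to_pseudo_nir helper to indicate how virtual NIR training data could in principle be derived from RGB frames.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import resnet50

    class SegmentationNet(nn.Module):
        """Encoder-decoder for per-pixel segmentation (illustrative only)."""

        def __init__(self, num_classes: int = 2):
            super().__init__()
            backbone = resnet50(weights=None)
            # Encoder: ResNet-50 stages up to the final 2048-channel feature
            # map; the classification head (avgpool + fc) is dropped.
            self.encoder = nn.Sequential(*list(backbone.children())[:-2])
            # Decoder: a 1x1 projection to class logits followed by bilinear
            # upsampling back to the input size. The paper's decoder and its
            # speed-oriented ResNet-50 modifications are not reproduced here.
            self.classifier = nn.Conv2d(2048, num_classes, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h, w = x.shape[-2:]
            features = self.encoder(x)          # (N, 2048, h/32, w/32)
            logits = self.classifier(features)
            return F.interpolate(logits, size=(h, w), mode="bilinear",
                                 align_corners=False)

    def rgb_to_pseudo_nir(frames: torch.Tensor) -> torch.Tensor:
        """Hypothetical virtual-NIR augmentation: collapse RGB to a single
        intensity channel and replicate it, mimicking the monochromatic look
        of NIR footage. The paper's actual transform may differ."""
        gray = (0.299 * frames[:, 0] + 0.587 * frames[:, 1]
                + 0.114 * frames[:, 2])
        return gray.unsqueeze(1).repeat(1, 3, 1, 1)

    model = SegmentationNet()
    frame = torch.randn(1, 3, 576, 960)        # one frame at 960 x 576 pixels
    mask = model(frame).argmax(dim=1)          # per-pixel class prediction
    nir_like = rgb_to_pseudo_nir(frame)        # virtual NIR training sample

With the encoder truncated this way, a 960 × 576 input yields a feature map at stride 32; the real-time throughput quoted in the abstract would depend on the authors' additional architectural modifications.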

Highlights

  • According to the World Health Organization, 15 million babies are born prematurely each year [1] and lack fully developed biological and physiological systems

  • Contact-based sensors imply a risk of injuries, such as “medical adhesive-related skin injuries” (MARSI), a serious problem for preterm infants with vulnerable and fragile skin [4]

  • The method is capable of performing segmentation in real time for future PPGI applications

Introduction

According to the World Health Organization, 15 million babies are born prematurely each year [1] and lack fully developed biological and physiological systems. Besides the neurodevelopmental problems that are highly associated with these patients, the functional immaturity of the various organs and their regulatory mechanisms commonly leads to complications [2]. These can result in irregular cardiorespiratory patterns, which in turn can lead to clinical complications [3]. Continuous monitoring of cardiovascular signals is crucial, as changes are often observed prior to major complications. State-of-the-art physiological monitoring of neonates involves skin-attached sensors, e.g., electrocardiography (ECG) electrodes, pulse oximeters, or temperature probes, in combination with the respective wires. Contact-based sensors imply a risk of injuries, such as “medical adhesive-related skin injuries” (MARSI), a serious problem for preterm infants with vulnerable and fragile skin [4].
