In recent years, health-monitoring systems have become increasingly important in the medical and safety fields, including patient and driver monitoring. Remote photoplethysmography (rPPG) captures blood-flow changes due to cardiac activity by using a camera to measure light transmitted through or reflected from the skin, but it is sensitive to changes in illumination and to motion. Moreover, rPPG signals measured from non-skin regions are unreliable, leading to inaccurate rPPG estimation. In this study, we propose Skin-SegNet, a network that minimizes these noise factors and improves pulse-signal quality through precise skin segmentation. Skin-SegNet separates skin pixels from non-skin pixels, including accessories such as glasses and hair, by training on facial structural elements and skin textures. In addition, Skin-SegNet reduces model parameters using an information-blocking decoder and a spatial squeeze module, achieving a fast inference time of 15 ms on an Intel i9 CPU. For verification, we evaluated Skin-SegNet on the PURE dataset, which consists of heart-rate measurements from various environments. Compared with other skin segmentation methods of similar inference speed, Skin-SegNet achieved a mean absolute percentage error of 1.18%, an improvement of approximately 60% over their 4.48% error rate. It also achieves this better performance with only 0.019 million parameters, compared to the 5.22 million parameters of DeepLabV3+. Consequently, Skin-SegNet is expected to serve as an effective preprocessing technique for efficient rPPG on low-spec computing devices.
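The role of skin segmentation as an rPPG preprocessing step, and the mean-absolute-percentage-error metric reported above, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the skin masks are assumed to come from some segmenter (such as Skin-SegNet). In typical rPPG pipelines, only skin pixels are spatially averaged per frame to form the trace from which the pulse is extracted.

```python
import numpy as np

def spatial_average_rppg(frames, skin_masks):
    """Average only skin pixels in each frame (illustrative sketch).

    frames: (T, H, W, 3) uint8 video frames
    skin_masks: (T, H, W) boolean masks from a skin segmenter (assumed)
    Returns a (T, 3) trace of mean skin RGB values, the usual input
    to rPPG pulse-extraction methods.
    """
    traces = []
    for frame, mask in zip(frames, skin_masks):
        if mask.any():
            # Restrict the spatial average to skin pixels, excluding
            # unreliable non-skin regions (background, glasses, hair).
            traces.append(frame[mask].mean(axis=0))
        else:
            traces.append(np.zeros(3))  # no skin detected in this frame
    return np.asarray(traces)

def mape(estimated_hr, reference_hr):
    """Mean absolute percentage error, the metric used in the evaluation."""
    est = np.asarray(estimated_hr, dtype=float)
    ref = np.asarray(reference_hr, dtype=float)
    return 100.0 * np.mean(np.abs(est - ref) / ref)
```

For example, an estimated heart rate of 99 bpm against a 100 bpm reference yields a MAPE of 1.0%, on the order of the 1.18% figure reported for Skin-SegNet.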