Abstract
The computer vision field has achieved great success in interpreting semantic meaning from images, yet its algorithms can be brittle for tasks under adverse vision conditions or tasks suffering from limited data/label pairs. Among these tasks is in-bed human pose monitoring, which has significant value in many healthcare applications. In-bed pose monitoring in natural settings involves pose estimation in complete darkness or under full occlusion. The lack of publicly available in-bed pose datasets hinders the applicability of many successful human pose estimation algorithms to this task. In this paper, we introduce our Simultaneously-collected multimodal Lying Pose (SLP) dataset, which includes in-bed pose images from 109 participants captured using multiple imaging modalities, including RGB, long-wave infrared (LWIR), depth, and pressure map. We also present a physical hyperparameter tuning strategy for ground truth pose label generation under adverse vision conditions. The SLP design is compatible with mainstream human pose datasets; therefore, state-of-the-art 2D pose estimation models can be trained effectively on the SLP data, with promising performance as high as 95% PCKh@0.5 on a single modality. The pose estimation performance of these models can be further improved by incorporating additional modalities through the proposed collaborative scheme.
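For context, PCKh@0.5 (Percentage of Correct Keypoints, head-normalized) counts a predicted joint as correct when its distance to the ground-truth joint is at most half the head segment length. Below is a minimal illustrative sketch of this standard metric; the array shapes and the `head_sizes` input are assumptions for illustration, not the SLP evaluation code.

```python
import numpy as np

def pckh(pred, gt, head_sizes, thresh=0.5):
    """PCKh: fraction of predicted joints within `thresh` * head size
    of the ground truth.

    pred, gt   : (N, J, 2) arrays of 2D joint coordinates
    head_sizes : (N,) array of per-image head segment lengths (pixels)
    thresh     : normalized distance threshold (0.5 for PCKh@0.5)
    """
    # Euclidean distance between each predicted and ground-truth joint
    dists = np.linalg.norm(pred - gt, axis=-1)    # (N, J)
    # Normalize each image's distances by its head segment length
    norm_dists = dists / head_sizes[:, None]      # (N, J)
    # A joint counts as correct if its normalized distance is within the threshold
    return float((norm_dists <= thresh).mean())

# Example: 2 images, 14 joints each, with small random perturbations
rng = np.random.default_rng(0)
gt = rng.uniform(0, 256, size=(2, 14, 2))
pred = gt + rng.normal(0, 5, size=gt.shape)
print(f"PCKh@0.5 = {pckh(pred, gt, head_sizes=np.full(2, 30.0)):.3f}")
```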