Human Pose Estimation (HPE) is a crucial step towards understanding people in images and videos. HPE provides geometric and motion information about the human body, which has been applied to a wide range of applications (e.g., human-computer interaction, motion analysis, augmented reality, virtual reality, and healthcare). A particularly useful task of this kind is 2D pose estimation of bedridden patients from infrared (IR) images. Here, the IR imaging modality is preferred due to privacy concerns and the need to monitor both uncovered and covered patients under different levels of illumination. The main challenge in this setting is the unavailability of covered training examples, which are costly to collect and time-consuming to label. In this work, a deep learning-based framework was developed for human sleeping pose estimation on covered images using only uncovered training images. In the training scheme, two image augmentation techniques, a statistical approach and a GAN-based approach, were explored for domain adaptation, with the statistical approach performing better. The accuracy of the model trained on the statistically augmented dataset improved by 124% compared with the model trained on non-augmented images. To handle the scarcity of training infrared images, a transfer learning strategy was used by pre-training the model on an RGB pose estimation dataset, yielding a further 4% gain in accuracy. Semi-supervised learning techniques, with a novel pose discriminator model in the loop, were adopted to exploit the unannotated training data, adding another 3% increase in accuracy. Thus, the proposed training pipeline, powered by heavy augmentation, achieves significant improvements in 2D pose estimation from infrared images with a comparatively small amount of annotated data and a large amount of unannotated data.
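The sketch below illustrates the three stages named in the abstract (RGB pre-training, supervised fine-tuning on statistically augmented uncovered IR images, and a semi-supervised step with a pose discriminator in the loop). It is a minimal PyTorch illustration under assumed shapes and placeholder networks; the model architectures, the `statistical_cover_augment` function, and the hypothetical `rgb_pretrained.pt` checkpoint are not from the paper.

```python
# Minimal sketch of the described training pipeline; all modules and data here
# are illustrative stand-ins, not the authors' actual implementation.
import torch
import torch.nn as nn

class PoseEstimator(nn.Module):
    """Toy heatmap-style pose network standing in for the real backbone."""
    def __init__(self, num_joints=14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_joints, 3, padding=1),
        )
    def forward(self, x):
        return self.features(x)

class PoseDiscriminator(nn.Module):
    """Scores predicted joint heatmaps as plausible or not (semi-supervised loop)."""
    def __init__(self, num_joints=14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_joints, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )
    def forward(self, heatmaps):
        return self.net(heatmaps)

def statistical_cover_augment(ir_image, noise_std=0.05):
    """Assumed 'statistical' augmentation: smooth and attenuate the uncovered
    IR image and add noise to roughly mimic the appearance of a blanket."""
    blurred = nn.functional.avg_pool2d(ir_image, 5, stride=1, padding=2)
    return 0.7 * blurred + noise_std * torch.randn_like(ir_image)

# Stage 1: transfer learning (weights would come from an RGB pose dataset).
model = PoseEstimator()
# model.load_state_dict(torch.load("rgb_pretrained.pt"))  # hypothetical checkpoint

# Stage 2: supervised fine-tuning on augmented uncovered IR images.
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.rand(4, 1, 64, 64)            # stand-in for labelled IR images
target_heatmaps = torch.rand(4, 14, 64, 64)  # stand-in for joint heatmaps
augmented = statistical_cover_augment(images)
loss = nn.functional.mse_loss(model(augmented), target_heatmaps)
opt.zero_grad(); loss.backward(); opt.step()

# Stage 3: semi-supervised step using unannotated images and the discriminator.
disc = PoseDiscriminator()
unlabelled = torch.rand(4, 1, 64, 64)        # stand-in for unannotated IR images
pred = model(unlabelled)
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    disc(pred), torch.ones(4, 1))            # push predictions toward "plausible"
opt.zero_grad(); adv_loss.backward(); opt.step()
```

In practice the supervised and semi-supervised updates would alternate over full data loaders, and the discriminator would itself be trained against ground-truth heatmaps; the single-batch steps above only indicate where each loss enters the loop.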