Abstract

Human Pose Estimation from images allows the recognition of key daily activity patterns in Smart Environments. Current state-of-the-art (SOTA) 3D pose estimators are built on visible-spectrum images, which can lead to privacy concerns in Ambient Assisted Living solutions. Thermal vision sensors are being deployed in these environments, as they preserve privacy and operate in low-brightness conditions. Furthermore, multi-view setups provide the most accurate 3D pose estimation, as the occlusion problem is overcome by having images from different perspectives. Nevertheless, no solutions in the literature use thermal vision sensors following a multi-view scheme.

In this work, a multi-view setup consisting of low-cost devices is deployed in the Smart Home of the University of Almería. Thermal and visible images are paired using homography, and SOTA solutions such as YOLOv3 and BlazePose are used to annotate the bounding box and 2D pose in the thermal images. ThermalYOLO, built by fine-tuning YOLOv3, outperforms YOLOv3 by 5% in bounding box recognition and by 1% in IoU. Furthermore, InceptionResNetV2 is found to be the most appropriate architecture for 2D pose estimation.

Finally, a 3D pose estimator is built by comparing input approaches and convolutional architectures. Results show that the most appropriate architecture processes the three single-channel thermal images with independent convolutional backbones (ResNet50 in this case), whose outputs are then fused with the 2D poses. The resulting convolutional neural network shows excellent behaviour in the presence of occlusions, outperforming single-view SOTA approaches in the visible spectrum.
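To make the fused multi-view design concrete, the following is a minimal PyTorch sketch of the architecture described above: three single-channel thermal views, each processed by its own ResNet50 backbone, with the pooled features concatenated with the 2D poses before regressing the 3D pose. This is an illustrative reconstruction, not the authors' implementation; the joint count, head sizes, and fusion details are assumptions.

```python
# Hypothetical sketch of the three-branch thermal 3D pose estimator.
# Joint count, layer sizes, and fusion strategy are assumed, not taken from the paper.
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_JOINTS = 17  # assumed number of body joints

class MultiViewThermal3DPose(nn.Module):
    def __init__(self, num_views=3, num_joints=NUM_JOINTS):
        super().__init__()
        self.num_joints = num_joints
        self.backbones = nn.ModuleList()
        for _ in range(num_views):
            backbone = resnet50(weights=None)
            # Thermal images are single-channel, so replace the RGB stem.
            backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                       padding=3, bias=False)
            backbone.fc = nn.Identity()  # keep the 2048-d pooled features
            self.backbones.append(backbone)
        # Fuse image features from all views with the 2D poses (x, y per joint per view).
        fused_dim = num_views * 2048 + num_views * num_joints * 2
        self.head = nn.Sequential(
            nn.Linear(fused_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, num_joints * 3),  # 3D coordinates per joint
        )

    def forward(self, thermal_views, poses_2d):
        # thermal_views: list of num_views tensors, each (B, 1, H, W)
        # poses_2d: (B, num_views, num_joints, 2)
        feats = [bb(img) for bb, img in zip(self.backbones, thermal_views)]
        fused = torch.cat(feats + [poses_2d.flatten(start_dim=1)], dim=1)
        return self.head(fused).view(-1, self.num_joints, 3)

# Example usage with dummy inputs
model = MultiViewThermal3DPose()
views = [torch.randn(2, 1, 224, 224) for _ in range(3)]
poses = torch.randn(2, 3, NUM_JOINTS, 2)
out = model(views, poses)  # shape: (2, 17, 3)
```

Keeping one backbone per view (rather than sharing weights) matches the abstract's description of independent convolutional backbones, and late fusion with the 2D poses lets the regression head exploit both appearance features and keypoint geometry.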
