Abstract

For reliable human-robot interaction, a robot must recognize a person's action in order to plan an appropriate way to interact with or assist them. As a pre-processing stage of action recognition, the robot also needs to recognize the person's body parts and posture. Estimating posture and body parts is challenging, however, due to the articulated nature of the human body and the large intra-class variations. To address this challenge, we propose two schemes based on the Hierarchical Extreme Learning Machine (H-ELM) for classifying a posture as upright or non-upright. The first scheme follows a whole-body detection approach, in which a single H-ELM classifier is trained on several whole-body postures. The second scheme follows a body-part detection approach, in which a separate H-ELM classifier is trained for each body part; a final decision on the person's posture is then made from the detected body parts. We conducted several experiments to compare the performance of the two approaches under different scenarios, such as view-angle changes and occlusion. Our experimental results show that the body-part-based H-ELM posture detection outperforms the whole-body framework, even in the presence of occlusion.
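To make the second scheme concrete, the sketch below shows one plausible realization: a separate classifier is trained per body part, parts that are occluded or undetected cast no vote, and a majority vote over the remaining parts yields the final upright/non-upright decision. The abstract does not specify the fusion rule, the body-part set, or the feature interface, so all of those are illustrative assumptions here; and since no public H-ELM implementation is assumed, scikit-learn's MLPClassifier stands in for the H-ELM classifiers.

```python
# A minimal sketch of the body-part scheme, under stated assumptions:
# one classifier per body part, occluded (undetected) parts skipped, and a
# majority vote fusing the remaining per-part predictions into a final
# upright / non-upright decision. MLPClassifier is only a stand-in for H-ELM.
from collections import Counter

from sklearn.neural_network import MLPClassifier  # stand-in for H-ELM

BODY_PARTS = ["head", "torso", "left_leg", "right_leg"]  # illustrative part set
UPRIGHT, NON_UPRIGHT = 0, 1

def train_part_classifiers(features_by_part, labels):
    """Train one classifier per body part on that part's feature matrix."""
    classifiers = {}
    for part in BODY_PARTS:
        clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
        clf.fit(features_by_part[part], labels)  # (n_samples, n_features) array
        classifiers[part] = clf
    return classifiers

def classify_posture(classifiers, features_by_part):
    """Fuse per-part predictions by majority vote; occluded parts cast no vote."""
    votes = []
    for part, clf in classifiers.items():
        x = features_by_part.get(part)  # None when the part was not detected
        if x is not None:
            votes.append(int(clf.predict(x.reshape(1, -1))[0]))
    if not votes:
        return None  # no visible parts, so no decision is possible
    return Counter(votes).most_common(1)[0][0]
```

One design consequence of this per-part voting, which matches the paper's finding, is robustness to occlusion: hiding one body part removes a single vote rather than corrupting the whole-body feature vector that a single monolithic classifier would see.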
