Abstract

Purpose: This study aims to develop a real-time algorithm that can detect people even in arbitrary poses. To cope with poor and changing light conditions, the method does not rely on color information. It is also expected to run on computers with low computational resources so that it can be deployed on autonomous mobile robots.

Design/methodology/approach: The method is designed as a people detection pipeline with a series of operations. Efficient point cloud processing steps, together with a novel head extraction operation, provide candidate head clusters in the scene. Classifying these clusters with support vector machines yields a fast and robust people detector.

Findings: The method is implemented on an autonomous mobile robot, and results show that it can detect people at a frame rate of 28 Hz with an equal error rate of 92 per cent. The detector also classifies people effectively in various non-standard poses.

Research limitations/implications: The main limitations are point clouds that resemble a head shape, which cause false positives, and disruptive accessories (such as large hats), which cause false negatives. These can still be overcome with sufficient training samples.

Practical implications: The method can be used in industrial and social mobile applications because of its robustness, low resource needs and low power consumption.

Originality/value: The paper introduces a novel and efficient technique for detecting people in arbitrary poses, under poor light conditions and with low computational resources. Solving all these problems in a single, lightweight method makes this study fulfill an important need for collaborative and autonomous mobile robots.
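The sketch below illustrates the general shape of such a pipeline: cluster the point cloud, keep head-sized candidate clusters, and classify them with an SVM. It is a minimal, hypothetical example only; the clustering method (DBSCAN here), size thresholds and geometric features are assumptions, not the paper's actual head extraction operation or feature set.

    # Minimal sketch of a head-candidate + SVM pipeline (illustrative assumptions only)
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.svm import SVC

    def extract_head_candidates(points, eps=0.08, min_points=30):
        """Cluster an N x 3 point cloud and keep clusters whose bounding box
        is roughly head-sized (hypothetical thresholds in metres)."""
        labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
        candidates = []
        for label in set(labels) - {-1}:          # -1 marks noise points
            cluster = points[labels == label]
            extent = cluster.max(axis=0) - cluster.min(axis=0)
            if np.all((0.10 < extent) & (extent < 0.35)):
                candidates.append(cluster)
        return candidates

    def cluster_features(cluster):
        """Simple geometric features per candidate (bounding-box extent,
        covariance eigenvalues, point count); illustrative only."""
        extent = cluster.max(axis=0) - cluster.min(axis=0)
        eigvals = np.linalg.eigvalsh(np.cov(cluster.T))
        return np.concatenate([extent, eigvals, [len(cluster)]])

    def train_head_classifier(clusters, labels):
        """Fit an SVM on labelled head / non-head candidate clusters."""
        X = np.stack([cluster_features(c) for c in clusters])
        return SVC(kernel="rbf").fit(X, labels)

    def detect_people(points, clf):
        """Return the candidate clusters the SVM classifies as heads."""
        candidates = extract_head_candidates(points)
        if not candidates:
            return []
        X = np.stack([cluster_features(c) for c in candidates])
        return [c for c, y in zip(candidates, clf.predict(X)) if y == 1]

In practice, replacing the generic clustering and features above with the paper's dedicated head extraction step is what would give the reported speed, since only a small number of head-sized clusters reach the SVM.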
