Abstract

This paper presents a new method for activity analysis of construction workers using inexpensive RGB+depth sensors. This is an important task, as no current workface assessment method can provide detailed and continuous information to help project managers identify bottlenecks affecting laborers' productivity. Previous work using RGB-D images focuses on action recognition from short video sequences wherein only one action is represented within each video. Automating this analysis for long sequences of RGB-D images is challenging because the start and end of each action are unknown, recognizing even single actions remains difficult, and there are no standard datasets or validation metrics for evaluating algorithms. Given an input sequence of RGB-D images, our algorithm divides it into temporal segments and automatically classifies the observed actions. To do so, the algorithm first detects body postures in real time. Then a Kernel Density Estimation (KDE) model is trained to model classification scores from discriminatively-trained bag-of-poses action classifiers. Finally, a Hidden Markov Model (HMM) labels the most likely sequence of actions. The performance of our model is tested on unprecedented datasets of actual drywall construction operations. Experimental results, together with the perceived benefits and limitations of the proposed method, are discussed in detail.
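The pipeline the abstract describes — per-frame classifier scores modeled by KDEs, then an HMM decoding the action sequence — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the classifier scores are synthetic stand-ins for the bag-of-poses outputs, and the two-action setup, transition probabilities, and Gaussian score distributions are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic training scores per action (stand-ins for the
# bag-of-poses classifier outputs described in the abstract).
train_scores = {
    0: rng.normal(1.0, 0.3, 200),   # typical scores when action 0 occurs
    1: rng.normal(-0.5, 0.3, 200),  # typical scores when action 1 occurs
}

# Fit one KDE per action to model the distribution of its scores.
kdes = {a: gaussian_kde(s) for a, s in train_scores.items()}

# An observed score sequence: 5 frames of action 0, then 5 of action 1.
obs = np.concatenate([rng.normal(1.0, 0.3, 5), rng.normal(-0.5, 0.3, 5)])

# Per-frame log-likelihood of the observed score under each action's KDE.
log_lik = np.stack([np.log(kdes[a](obs) + 1e-12) for a in (0, 1)])  # (2, T)

# Two-state HMM with "sticky" transitions; Viterbi decoding finds the
# most likely action label sequence over the whole observation.
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_init = np.log(np.array([0.5, 0.5]))

T = obs.size
delta = np.zeros((2, T))            # best log-probability ending in each state
back = np.zeros((2, T), dtype=int)  # backpointers for path recovery
delta[:, 0] = log_init + log_lik[:, 0]
for t in range(1, T):
    scores = delta[:, t - 1][:, None] + log_trans  # (prev_state, cur_state)
    back[:, t] = np.argmax(scores, axis=0)
    delta[:, t] = np.max(scores, axis=0) + log_lik[:, t]

path = np.zeros(T, dtype=int)
path[-1] = int(np.argmax(delta[:, -1]))
for t in range(T - 2, -1, -1):
    path[t] = back[path[t + 1], t + 1]

print(path.tolist())  # decoded per-frame action labels
```

The sticky transition matrix encodes the prior that workers persist in an action across frames, which is what lets the HMM segment a long sequence into coherent action intervals rather than flickering between labels frame by frame.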

