Abstract

The advent of affordable jobsite cameras is reshaping how on-site construction activities are monitored. To facilitate the analysis of large collections of videos, research has focused on replacing manual workface assessment by recognizing worker and equipment activities with computer-vision algorithms. Despite the proliferation of these methods, the ability to automatically recognize and understand worker and equipment activities from videos remains limited. Current algorithms require large-scale annotated workface assessment video data to learn models that can cope with the high degree of intraclass variability among activity categories. To address these limitations, this study proposes crowdsourcing the task of workface assessment from jobsite video streams. By introducing an intuitive web-based platform for massive marketplaces such as Amazon Mechanical Turk (AMT) and several automated methods, the intelligence of the crowd is engaged for interpreting jobsite vid...
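The abstract does not specify how worker responses are combined, but crowdsourced video annotation typically aggregates multiple workers' labels per clip. Below is a minimal, hypothetical sketch assuming a majority-vote aggregation over activity labels; the clip identifiers and label names are illustrative, not from the paper.

```python
from collections import Counter

def aggregate_labels(annotations):
    """Majority-vote each clip's crowd labels.

    annotations: dict mapping clip id -> list of labels from different workers.
    Ties are broken by first-seen order (Counter preserves insertion order).
    """
    return {clip: Counter(labels).most_common(1)[0][0]
            for clip, labels in annotations.items()}

# Hypothetical crowd responses for two jobsite video clips
crowd = {
    "clip_001": ["laying_brick", "laying_brick", "idle"],
    "clip_002": ["idle", "idle", "transporting"],
}
print(aggregate_labels(crowd))
# -> {'clip_001': 'laying_brick', 'clip_002': 'idle'}
```

In practice, crowdsourcing pipelines often weight votes by per-worker reliability estimated from gold-standard clips, but simple majority voting is a common baseline.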
