Image classification is increasingly used on construction sites to automate project monitoring, driven by advances in reality-capture technologies and artificial intelligence (AI). However, deploying real-time applications remains challenging because of the limited computing resources available on-site, particularly on remote construction sites with little telecommunication support or with poor access caused by high signal attenuation within a structure. To address this issue, this research proposes an efficient edge-computing-enabled image classification framework to support real-time construction AI applications. A lightweight binary image classifier was developed using MobileNet transfer learning, followed by quantization to reduce model size while maintaining accuracy. A complete edge computing hardware module, including a Raspberry Pi, an Edge TPU, and a battery, was assembled, and a multimodal software module (incorporating visual, textual, and audio data) was integrated into the edge computing environment to enable an intelligent image classification system. Two practical case studies, material classification and safety detection, were deployed to demonstrate the effectiveness of the proposed framework. The results demonstrated that the developed prototype successfully synchronized the multimodal mechanisms and differentiated materials and identified hazardous nails in real time, with no network latency and without any internet connectivity. Construction managers can leverage the developed prototype to facilitate centralized management without compromising accuracy or requiring extra investment in computing resources. This research paves the way for enabling edge “intelligence” on future construction job sites and promoting real-time human-technology interactions without the need for high-speed internet.
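As an illustration of the kind of pipeline the abstract describes (MobileNet transfer learning followed by quantization for Edge TPU deployment), the sketch below builds a binary classifier on a frozen MobileNetV2 backbone and converts it to a fully integer-quantized TFLite model. It is not the authors' implementation; the dataset path, hyperparameters, and file names are hypothetical assumptions.

```python
# Hedged sketch, not the paper's code: MobileNetV2 transfer learning for a
# binary classifier, then post-training full-integer quantization so the
# model can be compiled for a Coral Edge TPU attached to a Raspberry Pi.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Hypothetical training directory with two class subfolders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Transfer learning: frozen ImageNet backbone + small binary head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Post-training full-integer quantization (required for the Edge TPU).
def representative_data_gen():
    for images, _ in train_ds.take(100):
        yield [tf.cast(images, tf.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("classifier_int8.tflite", "wb") as f:
    f.write(converter.convert())
# The resulting .tflite file would then be passed through the edgetpu_compiler
# and executed on the edge device, so inference needs no internet connection.
```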