We propose a novel framework for detecting 3D human–object interactions (HOI) on construction sites, together with a toolkit for generating construction-related human–object interaction graphs. Computer vision methods have been adopted for construction site safety surveillance in recent years, but current approaches rely on videos and images, performing safety verification with common-sense knowledge and without considering the 3D spatial relationships among detected instances. We propose a new method that incorporates spatial understanding by inferring interactions directly from 3D point cloud data. The proposed model is trained on a 3D construction site dataset generated by our simulation toolkit, and achieves 54.11% mean intersection over union (mIoU) and 72.98% mean average precision (mAP) for worker–object interaction recognition. The model is also validated on PiGraphs, a benchmark dataset with 3D human–object interaction types, and compared against existing 3D interaction detection frameworks; it outperforms the state-of-the-art model, improving interaction detection mAP by 17.01%. In addition to the 3D interaction model, we simulate interactions from industrial surveillance footage using motion capture (MoCap) and physical constraints, and will release the resulting data to foster future studies in the domain.