Abstract
Construction equipment operations that require sustained attention can cause mental fatigue, which leads to inefficiencies and accidents. Previous studies classified mental fatigue using single-modal data with acceptable accuracy. However, mental fatigue is a multimodal problem, and no single modality is clearly superior. Moreover, no previous study in the construction industry has investigated multimodal data fusion for classifying mental fatigue, or whether such an approach improves detection. This study proposes a novel approach that uses three machine learning models and multimodal data fusion to classify mental fatigue states. Electroencephalography, electrodermal activity, and video signals were acquired during an excavation operation; the decision tree model using multimodal sensor data fusion outperformed the other models, achieving 96.2% accuracy and F1 scores of 96.175%–98.231%. Multimodal sensor data fusion can aid the development of a real-time system to classify mental fatigue and improve safety management at construction sites.
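The fusion-and-classification approach described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it assumes feature-level (early) fusion by concatenating per-modality feature vectors, uses entirely synthetic data, and the feature dimensions for each modality are invented for the example.

```python
# Hedged sketch of multimodal feature fusion + decision-tree classification.
# All data is synthetic; modality feature counts are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
n = 300  # synthetic samples

# Assumed per-modality feature vectors (dimensions chosen for illustration)
eeg = rng.normal(size=(n, 8))    # e.g. EEG band-power features
eda = rng.normal(size=(n, 4))    # e.g. skin-conductance features
video = rng.normal(size=(n, 6))  # e.g. facial/eye-movement features

# Synthetic binary fatigue labels correlated with the signals,
# so the classifier has a learnable pattern
y = (eeg[:, 0] + eda[:, 0] + video[:, 0] > 0).astype(int)

# Feature-level (early) fusion: concatenate all modality features
X = np.hstack([eeg, eda, video])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("F1 score:", f1_score(y_te, pred))
```

A real system would replace the synthetic arrays with engineered features extracted from the EEG, EDA, and video streams; early fusion is only one of several fusion strategies (decision-level fusion, where each modality gets its own classifier and the outputs are combined, is a common alternative).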