ABSTRACT
Fatigue detection is critical for the timely identification of safety hazards, yet prevailing fatigue detection methods often overlook the diverse spectrum of fatigue features or temporal cues. To address this gap, we introduce fatigue detection based on blood volume pulse signal and multi‐physical features (FDBVPS‐MF). First, a non‐invasive technique extracts the blood volume pulse signal (BVPS) from the forehead region, which is fed into a one‐dimensional convolutional neural network (1D CNN) to build a BVPS‐based fatigue detection model. Concurrently, the percentage of eyelid closure (PERCLOS), blink frequency (BF), and maximum closing time (MCT) are computed from eye images and combined with the yawning frequency (YF) derived from mouth images to form the multi‐physical features (MF); these are input to a second 1D CNN to build an MF‐based fatigue detection model. Finally, the outputs of the two models are fused using weights derived through AdaBoost, enabling multi‐modal fatigue detection. On the UTA‐RLDD dataset, the proposed FDBVPS‐MF achieves an accuracy of 88.9% and a precision of 88.2%, and experimental results confirm its superiority over conventional methods.
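The decision-level fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each branch (BVPS-based and MF-based) outputs a fatigue probability, and that the fusion weights take the standard AdaBoost form w = ½·ln((1−ε)/ε) computed from each model's validation error rate ε. The function names and the error rates in the usage example are hypothetical.

```python
import numpy as np

def adaboost_weight(eps):
    """Standard AdaBoost weight from a model's error rate eps in (0, 1)."""
    return 0.5 * np.log((1.0 - eps) / eps)

def fuse_predictions(p_bvps, p_mf, w_bvps, w_mf):
    """Normalized weighted average of the two branch probabilities.

    p_bvps, p_mf: fatigue probabilities from the BVPS and MF models.
    w_bvps, w_mf: AdaBoost-derived weights for each branch.
    """
    p_bvps = np.asarray(p_bvps, dtype=float)
    p_mf = np.asarray(p_mf, dtype=float)
    return (w_bvps * p_bvps + w_mf * p_mf) / (w_bvps + w_mf)

# Usage with illustrative (hypothetical) validation error rates:
w1 = adaboost_weight(0.15)   # BVPS branch, assumed 15% error
w2 = adaboost_weight(0.20)   # MF branch, assumed 20% error
fused = fuse_predictions(0.82, 0.64, w1, w2)
is_fatigued = fused >= 0.5   # threshold the fused probability
```

Because the weights are normalized in the fusion step, the fused score stays in [0, 1] and the more accurate branch (lower ε, larger weight) dominates the decision.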