Abstract

Fatigue detection for drivers in public transportation is crucial. To effectively detect a driver's fatigue state, we investigated deep learning-based fatigue detection and propose a multimodal-signal fatigue detection method. In the proposed method, a convolutional autoencoder (CAE) fuses electroencephalogram (EEG) and electrooculography (EOG) signal features, with the convolutional layers preserving spatial locality. The fused features are then fed into a recurrent neural network (RNN) for fatigue recognition. We tested the proposed framework on the SEED-VIG dataset and evaluated it with two statistical indicators, root mean square error (RMSE) and correlation coefficient (COR), achieving mean RMSE/COR of 0.10/0.93 and 0.11/0.88 on single-modality EOG and EEG features, respectively, and an improved 0.08/0.96 on multimodal features. In addition, this paper analyzes the effect of different signal features on the recognition results; the comparison shows that the model using multimodal features outperforms its single-modality counterparts. The experimental results show that the proposed framework outperforms other recognition algorithms, demonstrating its effectiveness for fatigue driving detection.
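To make the described pipeline concrete, below is a minimal PyTorch sketch of the architecture the abstract outlines: a convolutional autoencoder fuses concatenated EEG and EOG feature vectors, a recurrent head (an LSTM is used here as one common RNN choice; the abstract does not specify the variant) regresses a vigilance score, and RMSE/COR are computed as in the evaluation. All layer sizes, feature dimensions, and class names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvAutoencoderFusion(nn.Module):
    # Hypothetical CAE: concatenates per-step EEG and EOG feature vectors and
    # fuses them with 1-D convolutions, which preserve locality among
    # neighboring feature dimensions. The decoder reconstructs the input so
    # the encoder can be trained with a reconstruction loss.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=3, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, eeg, eog):
        x = torch.cat([eeg, eog], dim=-1).unsqueeze(1)  # (N, 1, D_eeg + D_eog)
        z = self.encoder(x)                             # fused representation
        recon = self.decoder(z)                         # reconstruction of x
        return z.squeeze(1), recon.squeeze(1)

class FatigueRegressor(nn.Module):
    # Hypothetical recurrent head: a sequence of fused feature vectors is
    # mapped to a scalar vigilance score (SEED-VIG labels are PERCLOS values).
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):                   # seq: (B, T, feat_dim)
        out, _ = self.rnn(seq)
        return self.head(out[:, -1]).squeeze(-1)

def rmse_cor(pred, target):
    # RMSE and Pearson correlation coefficient, the two metrics in the abstract.
    rmse = torch.sqrt(torch.mean((pred - target) ** 2))
    dp, dt = pred - pred.mean(), target - target.mean()
    cor = (dp * dt).sum() / (dp.norm() * dt.norm())
    return rmse.item(), cor.item()

if __name__ == "__main__":
    # Feature dimensions below are illustrative, not taken from the paper.
    eeg = torch.randn(8, 25, 85)   # 8 windows, 25 steps, 85 EEG features each
    eog = torch.randn(8, 25, 36)   # 36 EOG features per step
    B, T, _ = eeg.shape
    cae = ConvAutoencoderFusion()
    fused, recon = cae(eeg.reshape(B * T, -1), eog.reshape(B * T, -1))
    fused = fused.reshape(B, T, -1)           # back to (B, T, D_eeg + D_eog)
    model = FatigueRegressor(feat_dim=fused.size(-1))
    pred = model(fused)
    print(rmse_cor(pred, torch.rand(B)))
```

In a full training setup, the CAE would typically be optimized on the reconstruction of the concatenated features before (or jointly with) the regression loss on the RNN output; this sketch only wires the components together.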
