In recent years, road accidents have been increasing because of low driver vigilance, so estimating a driver's vigilance state plays a significant role in public transportation safety. To address this problem, we adopt a feature fusion strategy that combines electroencephalogram (EEG) signals collected from several sites of the human brain, including the forehead, temporal, and posterior regions, with forehead electrooculogram (forehead-EOG) signals. The vigilance level is predicted by a new learning model, the double-layered neural network with subnetwork nodes (DNNSN), which comprises several subnetwork nodes; each node is in turn composed of many hidden nodes with various capabilities such as feature selection (dimension reduction) and feature learning. The single-modality model using only the forehead-EOG signal achieves a mean root-mean-square error (RMSE) of 0.12 and a mean Pearson product-moment correlation coefficient (COR) of 0.78, while the EEG signal alone achieves a mean RMSE of 0.13 and a mean COR of 0.72. The proposed multimodal model achieves a mean RMSE of 0.09 and a mean COR of 0.85. Experimental results show that the proposed DNNSN with multimodal fusion outperforms the single-modality models for vigilance estimation, owing to the complementary information between forehead-EOG and EEG signals. After a favorable learning rate was applied to the input layer, the mean RMSE/COR values improved to 0.11/0.79, 0.12/0.74, and 0.08/0.86 for the three models, respectively. This quantitative analysis demonstrates that the proposed method offers better feasibility and learning efficiency than other state-of-the-art techniques.
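The two evaluation metrics quoted above, RMSE and the Pearson product-moment correlation coefficient (COR) between predicted and ground-truth vigilance scores, can be sketched as follows. This is a minimal illustrative sketch, not the authors' code; the function name and the assumption that scores are compared per session are ours.

```python
import numpy as np

def rmse_and_cor(y_true, y_pred):
    """Compute RMSE and Pearson correlation (COR) between
    ground-truth and predicted vigilance scores for one session.
    (Illustrative helper; the paper reports these metrics averaged
    across sessions as mean RMSE and mean COR.)"""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Root-mean-square error of the predictions.
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    # Pearson product-moment correlation coefficient.
    cor = np.corrcoef(y_true, y_pred)[0, 1]
    return rmse, cor
```

A lower mean RMSE and a higher mean COR both indicate better agreement between the predicted and true vigilance levels, which is why the multimodal result (0.09 RMSE, 0.85 COR) dominates either single modality.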