Abstract

In recent years, the rate of road accidents attributable to low driver vigilance has been rising, so estimating a driver's vigilance state plays a significant role in public transportation safety. To address this problem, we adopt a feature fusion strategy that combines electroencephalogram (EEG) signals collected from several sites on the scalp, including the forehead, temporal, and posterior regions, with forehead electrooculogram (forehead-EOG) signals. The vigilance level is predicted by a new learning model, the double-layered neural network with subnetwork nodes (DNNSN), which comprises several subnetwork nodes; each node is in turn composed of many hidden nodes with capabilities such as feature selection (dimension reduction) and feature learning. The single-modality model using only the forehead-EOG signal exhibits a mean root-mean-square error (RMSE) of 0.12 and a mean Pearson product-moment correlation coefficient (COR) of 0.78, while the EEG-only model achieves a mean RMSE of 0.13 and a mean COR of 0.72. In contrast, the proposed multimodal model achieves a mean RMSE of 0.09 and a mean COR of 0.85. The experimental results show that the proposed DNNSN with multimodal fusion outperforms the single-modality models for vigilance estimation, owing to the complementary information between forehead-EOG and EEG. After a favorable learning rate was applied to the input layer, the mean RMSE/COR values improved to 0.11/0.79, 0.12/0.74, and 0.08/0.86, respectively. This quantitative analysis demonstrates that the proposed method offers better feasibility and learning efficiency and outperforms other state-of-the-art techniques.
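For reference, the two evaluation metrics reported above are standard and can be computed as in the minimal NumPy sketch below. The vigilance values here are made up for illustration only and do not come from the study; the function names are likewise our own.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between true and predicted vigilance levels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def pearson_cor(y_true, y_pred):
    """Pearson product-moment correlation coefficient (COR)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.corrcoef(y_true, y_pred)[0, 1]

# Hypothetical ground-truth vigilance levels vs. model predictions.
y_true = np.array([0.2, 0.5, 0.7, 0.9, 0.4])
y_pred = np.array([0.25, 0.45, 0.75, 0.85, 0.5])

print(f"RMSE: {rmse(y_true, y_pred):.2f}")        # ~0.06
print(f"COR:  {pearson_cor(y_true, y_pred):.2f}")  # ~0.97
```

Lower RMSE and higher COR both indicate predictions that track the true vigilance level more closely, which is why the multimodal model's 0.09/0.85 (and 0.08/0.86 after tuning) improves on either single modality.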
