Passenger ships have complex transportation systems, and seafarers face high workloads, making them susceptible to serious injuries and fatalities in the event of an accident. Existing unimodal workload recognition for seafarers mainly focuses on fixed load induction in bridge simulators, whereas a multimodal approach using multi-sensor data fusion can overcome the reliability and sensitivity limitations of a single sensor. To accurately identify seafarers' workload, we propose a machine learning-based multimodal fusion method at the feature layer and utilise the Gini index to determine the feature weights of the multimodal data. In a real-ship navigation experiment, the subjective workload assessment technique (SWAT) was employed to collect continuous workload scores from 24 seafarers performing daily tasks. Dempster–Shafer evidence theory was then used to integrate these scores with the seafarers' unsafe-behaviour probability to obtain a calibrated workload. Electroencephalogram (EEG), electrocardiogram (ECG), and electrodermal activity (EDA) signals were collected in real time, and a high-dimensional feature matrix was extracted to construct the workload recognition model. Random forest, XGBoost, and back-propagation neural networks were used to build multimodal fusion workload recognition models at the feature-fusion stage, and their performances were compared. The results showed that multimodal fusion of EEG, ECG, and EDA achieved excellent recognition accuracy. The XGBoost model performed best, with an accuracy of 85.72%, an improvement of 9.49% over the unimodal model; this improvement passed a statistical significance test. Important features for multimodal fusion recognition were also analysed. These findings have significant implications for strengthening risk control of human factors and ensuring navigation safety.
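The abstract states that Dempster–Shafer evidence theory was used to fuse SWAT workload scores with unsafe-behaviour probabilities. As an illustration only, a minimal sketch of Dempster's rule of combination over three hypothetical workload levels (low/medium/high; the mass values below are invented, not the paper's data) could look like:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            # Mass assigned to the intersection of the two focal elements
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            # Disjoint focal elements contribute to the conflict term K
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    norm = 1.0 - conflict
    # Normalise by (1 - K) so the fused masses sum to 1
    return {s: m / norm for s, m in combined.items()}

# Hypothetical workload levels: low (L), medium (M), high (H)
L, M, H = frozenset("L"), frozenset("M"), frozenset("H")
# m_swat: evidence from SWAT scores; m_beh: evidence from
# unsafe-behaviour probability (both sets of masses are illustrative)
m_swat = {L: 0.1, M: 0.6, H: 0.3}
m_beh = {L: 0.2, M: 0.5, H: 0.3}
fused = dempster_combine(m_swat, m_beh)
```

With these example masses, both sources favour the medium level, so the fused belief concentrates there after renormalisation; the actual calibration in the study would use the measured SWAT scores and observed behaviour probabilities.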