Abstract

Single-modal data is limited for fatigue detection, while a shortage of labeled data is pervasive in multimodal sensing data. Moreover, manual annotation of physiological signals is time-consuming for board-certified experts, and EEG sensor data is especially difficult to label. To address these problems, we propose FedUSL (Federated Unified Space Learning), a federated annotation method for multimodal sensing data in the driving fatigue detection scenario, which can exploit four or more modalities simultaneously to capture their correlations and complementarity with low complexity. To validate the efficiency of the proposed method, we first collect multimodal data (i.e., camera and physiological sensor data) through simulated fatigue driving. The data is then preprocessed and features are extracted to form a usable multimodal dataset. Based on this dataset, we analyze the performance of the proposed method. The experimental results demonstrate that FedUSL outperforms other approaches for driver fatigue detection with carefully selected modality combinations, especially when a modality contains only \(10\%\) labeled data.
