Multimodal research is an emerging area of artificial intelligence, and the analysis of dangerous driving behavior is a key application of multimodal fusion. To address data heterogeneity in multimodal behavior classification, this paper proposes a low-rank multimodal data fusion method that exploits the complementarity between data modalities of different dimensions to classify and identify dangerous driving behaviors. The method enforces a low-rank fused representation of the tensor difference-matrix data, improves the verification efficiency for dangerous driving behaviors through multi-level abstract tensor representations, and reduces the complexity of the output data. A recurrent network based on the attention mechanism, AR-GRU, updates the network's input parameter states and learns the weight parameters through its gated structure. The model strengthens the dynamic connections between modalities on heterogeneous threads and reduces computational complexity, so that under low-rank conditions it can quickly and accurately classify dangerous driving behaviors and issue early warnings. In extensive experiments on a self-built dataset, the accuracy of the proposed method is on average 1.76% higher than that of the BiLSTM and BiGRU-IAAN methods.
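The low-rank fusion idea can be illustrated with a minimal sketch. This is a generic low-rank multimodal fusion in the style of LMF, not the paper's exact formulation; all dimensions, factor shapes, and the rank are hypothetical. Each modality is projected through rank-decomposed factors and the projections are combined elementwise, which is equivalent to a full tensor-product fusion with a rank-constrained weight tensor but never materializes that tensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: two modalities, fused output, and the rank
d_a, d_b, d_out, rank = 8, 12, 16, 4

# Rank-decomposed weight factors, one set per modality.
# The +1 accommodates an appended constant so unimodal terms survive.
W_a = 0.1 * rng.standard_normal((rank, d_a + 1, d_out))
W_b = 0.1 * rng.standard_normal((rank, d_b + 1, d_out))

def low_rank_fuse(z_a, z_b):
    """Fuse two modality vectors via rank-decomposed factors."""
    # Append 1 so the implicit tensor product keeps unimodal terms
    z_a = np.append(z_a, 1.0)
    z_b = np.append(z_b, 1.0)
    # Project each modality through its r rank-specific factor matrices
    proj_a = z_a @ W_a              # shape (rank, d_out)
    proj_b = z_b @ W_b              # shape (rank, d_out)
    # Elementwise product and sum over ranks: equals contracting the
    # outer product z_a ⊗ z_b with a rank-r weight tensor, but at
    # O(rank * (d_a + d_b) * d_out) cost instead of O(d_a * d_b * d_out).
    return (proj_a * proj_b).sum(axis=0)   # shape (d_out,)

h = low_rank_fuse(rng.standard_normal(d_a), rng.standard_normal(d_b))
print(h.shape)  # (16,)
```

The fused vector `h` would then feed a downstream classifier (here, the paper's AR-GRU); the low-rank factorization is what keeps the fusion step tractable as modality dimensions grow.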