Abstract

Driver steering intention prediction offers an augmented basis for designing an onboard collaboration mechanism between the human driver and the intelligent vehicle. In this study, a multi-task sequential learning framework is developed to predict future steering torques and steering postures from upper-limb neuromuscular electromyography (EMG) signals. Joint representation learning of driving postures and steering intention provides an in-depth understanding and accurate modelling of steering behaviours. Two driving modes, namely both-hand and single-right-hand driving, are studied; for each mode, three driving postures are further evaluated. A multi-task time-series transformer network (MTS-Trans) is then developed to predict future steering torques and driving postures from the multivariate sequential input using the self-attention mechanism. To evaluate multi-task learning performance and the information-sharing characteristics within the network, four distinct two-branch network architectures are compared. Empirical validation is conducted through a driving-simulator experiment with 21 participants. The proposed model achieves accurate future steering torque prediction and driving posture recognition in both the both-hand and single-right-hand driving modes. These findings hold significant promise for the advancement of driver steering assistance systems, fostering mutual comprehension and synergy between human drivers and intelligent vehicles.
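The two-branch multi-task idea described above (a shared sequence encoder feeding a torque-regression head and a posture-classification head) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's MTS-Trans implementation: the dimensions, weight initialisations, mean-pooling step, and the names `self_attention`, `w_torque`, and `w_posture` are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x, wq, wk, wv):
    # Scaled dot-product self-attention over a (time, feature) sequence.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over time steps
    return attn @ v

# Hypothetical sizes: 50 time steps of 8-channel EMG features, model width 16.
T, C, D = 50, 8, 16
x = rng.normal(size=(T, C))                 # multivariate EMG window
w_in = rng.normal(size=(C, D)) * 0.1        # input projection
wq, wk, wv = (rng.normal(size=(D, D)) * 0.1 for _ in range(3))

h = self_attention(x @ w_in, wq, wk, wv)    # shared encoder representation
pooled = h.mean(axis=0)                     # pool the sequence to one vector

# Two task branches sharing the same encoder output:
w_torque = rng.normal(size=(D, 1)) * 0.1    # regression: future steering torque
w_posture = rng.normal(size=(D, 3)) * 0.1   # classification: 3 driving postures

torque = float(pooled @ w_torque)           # predicted torque (scalar)
posture = int(np.argmax(pooled @ w_posture))  # predicted posture class index
```

Sharing the encoder while keeping the heads separate is what lets posture recognition and torque prediction regularise each other; the four two-branch variants in the study differ in how much of this trunk is shared between the tasks.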
