Abstract

Driver engagement in secondary activities while driving is a major contributor to negative driving safety outcomes. To reduce traffic accidents and ensure driving safety, a real-time driver activity recognition architecture is proposed in this study. Specifically, a total of eight common driving-related activities are identified, including normal driving, checking left or right, texting, answering the phone, using media, drinking, and picking up objects. Raw experimental videos are collected via onboard monocular cameras and used to extract the driver's upper-body skeleton information. Then, graph convolutional networks (GCN) are constructed for spatial structure feature reasoning within each single frame, followed by long short-term memory (LSTM) networks for temporal motion feature learning over the sequence. Moreover, an attention mechanism is further utilised to emphasise keyframes and select discriminative sequential information. Finally, a large-scale driver activity dataset, consisting of both naturalistic driving data and simulated driving data, is collected for model training and evaluation. Experimental results show that the overall recall of the eight driving-related activities reaches 88.8%, and the recognition speed reaches 24 fps, which satisfies the real-time requirements of engineering applications.
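For illustration, the sketch below outlines the described pipeline in PyTorch: a per-frame GCN over skeleton joints, an LSTM over the frame sequence, an attention layer that weights keyframes, and an 8-way classifier. The joint count, layer sizes, adjacency matrix, and module names are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the GCN -> LSTM -> attention pipeline described in the abstract.
# All dimensions and the adjacency matrix are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution over skeleton joints: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (batch, joints, in_dim); a_hat: normalised adjacency (joints, joints)
        return F.relu(self.linear(torch.einsum("ij,bjf->bif", a_hat, x)))

class DriverActivityNet(nn.Module):
    def __init__(self, num_joints=14, in_dim=2, gcn_dim=64, lstm_dim=128, num_classes=8):
        super().__init__()
        self.gcn1 = GCNLayer(in_dim, gcn_dim)
        self.gcn2 = GCNLayer(gcn_dim, gcn_dim)
        self.lstm = nn.LSTM(num_joints * gcn_dim, lstm_dim, batch_first=True)
        self.attn = nn.Linear(lstm_dim, 1)            # scores each timestep (keyframe weighting)
        self.classifier = nn.Linear(lstm_dim, num_classes)

    def forward(self, x, a_hat):
        # x: (batch, frames, joints, coord_dim) upper-body keypoints per frame
        b, t, j, f = x.shape
        h = self.gcn1(x.reshape(b * t, j, f), a_hat)   # spatial structure reasoning per frame
        h = self.gcn2(h, a_hat)
        h = h.reshape(b, t, -1)                        # flatten joint features per frame
        seq, _ = self.lstm(h)                          # temporal motion features over the clip
        w = torch.softmax(self.attn(seq), dim=1)       # attention weights over frames
        ctx = (w * seq).sum(dim=1)                     # attention-pooled sequence feature
        return self.classifier(ctx)                    # logits for the 8 activity classes

# Example: a 30-frame clip with 14 joints given as (x, y) coordinates.
model = DriverActivityNet()
clip = torch.randn(1, 30, 14, 2)
a_hat = torch.eye(14)   # placeholder for the normalised skeleton adjacency matrix
logits = model(clip, a_hat)   # shape: (1, 8)
```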
