Abstract

Real-time driving distraction detection has garnered significant attention due to its potential to support driving safety applications such as distraction warnings and driver assistance systems. Recent studies have focused on developing neural networks for vision-based detection, though achieving a balance between performance and efficiency remains challenging. In this paper, we propose a novel constrained attention (CA) mechanism for real-time driver distraction detection, which aims to improve performance while preserving computational efficiency. Specifically, we conduct case studies by generating class activation maps to inspect the model's attention, and we identify three potential factors that degrade performance: ambiguous attention signals, excessive attention regions, and similar attention across different classes. Two regularization terms are designed to address these obstacles. First, a concentrative regularization limits the size of the attention region while ensuring that pixels within the region receive clear attention values. Second, an orthogonal regularization encourages the attention of different classes to be discriminative. To further guide the model, we design an inter-sample constraint that encourages the attention of images with the same ground truth to be similar. Experiments on two driver distraction detection datasets show that our CA mechanism brings significant performance improvements. More importantly, it adds no additional computational burden when the trained model is deployed in real scenarios. Code is released at https://github.com/gaohangcodes/CAN4DDD.
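
To make the three constraints described above concrete, the sketch below shows one plausible PyTorch-style formulation of training-time regularizers over class activation maps. It is an illustration under our own assumptions, not the paper's implementation: the function names, the entropy-based concentrative term, the Gram-matrix orthogonality penalty, the mean-based inter-sample pull, the weights `a1`–`a3`, and the assumed `model.class_activation_maps` helper are all hypothetical.

```python
import torch
import torch.nn.functional as F

def concentrative_reg(cam):
    """Hypothetical concentrative regularization: encourages a compact,
    high-contrast attention region by penalising the entropy of a
    normalised class activation map (CAM)."""
    # cam: (B, H, W) attention map for the ground-truth class
    b = cam.shape[0]
    p = F.softmax(cam.view(b, -1), dim=1)            # normalise to a spatial distribution
    entropy = -(p * (p + 1e-8).log()).sum(dim=1)     # low entropy -> few, clearly attended pixels
    return entropy.mean()

def orthogonal_reg(cams):
    """Hypothetical orthogonal regularization: pushes per-class attention maps
    toward mutual orthogonality so different classes attend to different regions."""
    # cams: (B, C, H, W) per-class attention maps
    b, c = cams.shape[:2]
    flat = F.normalize(cams.view(b, c, -1), dim=2)   # unit-norm per class
    gram = torch.bmm(flat, flat.transpose(1, 2))     # (B, C, C) pairwise similarity
    off_diag = gram - torch.diag_embed(torch.diagonal(gram, dim1=1, dim2=2))
    return (off_diag ** 2).mean()

def intersample_constraint(cam, labels):
    """Hypothetical inter-sample constraint: attention maps of samples sharing
    a label are pulled toward their per-label mean."""
    # cam: (B, H, W) ground-truth-class attention; labels: (B,)
    loss, count = cam.new_zeros(()), 0
    for y in labels.unique():
        group = cam[labels == y]
        if group.shape[0] > 1:
            loss = loss + ((group - group.mean(dim=0, keepdim=True)) ** 2).mean()
            count += 1
    return loss / max(count, 1)

# Usage sketch: the terms are added to the classification loss during training only,
# so inference cost is unchanged once the model is deployed.
# cams = model.class_activation_maps(images)        # assumed helper, (B, C, H, W)
# gt_cam = cams[torch.arange(len(labels)), labels]
# loss = ce_loss + a1 * concentrative_reg(gt_cam) \
#        + a2 * orthogonal_reg(cams) + a3 * intersample_constraint(gt_cam, labels)
```

Because all three terms act only on the training loss, this construction is consistent with the abstract's claim that deployment incurs no extra computation.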
