Abstract
This paper describes an approach for automatic detection and localization of drivers and passengers in automobiles using in-cabin images. We used a convolutional neural network (CNN) framework and conducted experiments with the Faster R-CNN and Cascade R-CNN detectors. Training and evaluation were performed on the Second Strategic Highway Research Program (SHRP 2) naturalistic driving dataset, in which cabin images are blurred to preserve privacy. After detecting occupants inside the vehicle, the system classifies each occupant as driver, front-seat passenger, or back-seat passenger. On one SHRP 2 test set, the system detected occupants with an accuracy of 94.5%; detected occupants were then classified as driver with 99.5% accuracy, as front-seat passenger with 97.3% accuracy, and as back-seat passenger with 94.3% accuracy. The system performed slightly better on daytime images than on nighttime images. Unlike previous work, this method provides both presence classification and location prediction of occupants. Fine-tuning the object detection model also yields a significant improvement in detection accuracy over pretrained models. The study additionally contributes a fully annotated dataset of in-cabin images. This work is expected to facilitate research on interactions between drivers and passengers, particularly related to driver attention and safety.
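The abstract does not specify how detected occupants are mapped to seat roles, but a common post-processing approach is to classify each detection from its position and apparent size in the frame. The sketch below is purely illustrative: the geometry it assumes (driver on the image left, back-seat occupants appearing smaller because they are farther from the camera) and the threshold values are hypothetical, not taken from the paper.

```python
# Hypothetical seat-role assignment from a detector's bounding boxes.
# Boxes are (x1, y1, x2, y2) in pixel coordinates. The left/right and
# box-size rules below are illustrative assumptions, not the paper's method.

def classify_occupant(box, image_width, image_height):
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2                      # horizontal center of the detection
    rel_height = (y2 - y1) / image_height   # apparent size as a depth cue

    if rel_height < 0.35:                   # small box -> farther away -> back seat
        return "back-seat passenger"
    if cx < image_width / 2:                # left half of frame -> driver side (assumed)
        return "driver"
    return "front-seat passenger"


if __name__ == "__main__":
    W, H = 640, 480
    print(classify_occupant((40, 120, 280, 460), W, H))   # large box, left half
    print(classify_occupant((360, 130, 600, 450), W, H))  # large box, right half
    print(classify_occupant((300, 90, 380, 220), W, H))   # small box
```

In practice, such a rule would be calibrated to the camera mounting used in the SHRP 2 cabin views, and a learned classifier over box features could replace the hand-set thresholds.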
Transportation Research Record: Journal of the Transportation Research Board