Abstract

Most Driver Status Monitoring (DSM) systems perform spatial feature extraction followed by temporal status recognition. Extracting the spatial features, which include driver facial behaviors such as eye closing and mouth opening, generally requires considerable computation. As a result, DSM systems lose valuable, instantly occurring driver facial information during real-time processing. This loss of facial information degrades the accuracy of the system, and its impact is more severe on restricted computing resources. To solve this problem, this paper proposes an Adaptive Batch-Image (ABI) based DSM (ABI-DSM) system. The ABI enables the DSM system to use images captured in real time while it processes the previous input images. For real-time operation on a lightweight GPU-equipped Single-Board Computer (SBC), the ABI-DSM system is designed as follows. First, the system uses the driver's facial behavior to reduce the dimension of the time-series data used for recognizing the driver's status. Second, face detection and tracking are not used for facial behavior recognition. In addition, the system employs PydMobileNet, which has fewer parameters and lower FLOPs than MobileNetV2, for facial behavior recognition. Experiments show that the ABI-DSM systems based on MobileNetV2 and PydMobileNet perform better than other approaches in terms of both FPS and precision. In particular, the PydMobileNet-based ABI-DSM system outperforms its competitors when the Batch-Image size exceeds six images.
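As a rough illustration of the ABI idea described above, the following minimal sketch (not taken from the paper; the names `abi_dsm_loop`, `capture_frame`, and `process_batch` are hypothetical placeholders) shows a producer/consumer loop in which every frame captured while the previous batch is being processed is accumulated into the next, adaptively sized batch.

```python
import queue
import threading

def abi_dsm_loop(capture_frame, process_batch, stop_event):
    """Hypothetical sketch of Adaptive Batch-Image (ABI) processing.

    Frames that arrive while the recognizer is busy with the previous
    batch are accumulated and then processed together as the next
    batch, so captured frames are not discarded during inference.
    """
    frame_queue = queue.Queue()

    def producer():
        # Capture frames continuously and enqueue them in real time.
        while not stop_event.is_set():
            frame_queue.put(capture_frame())

    threading.Thread(target=producer, daemon=True).start()

    while not stop_event.is_set():
        # Drain everything captured so far: this forms the adaptive batch.
        batch = [frame_queue.get()]          # block until at least one frame
        while not frame_queue.empty():
            batch.append(frame_queue.get_nowait())
        process_batch(batch)                 # e.g. facial-behavior inference
```

Here `capture_frame` and `process_batch` stand in for the camera interface and the facial-behavior recognizer; under this reading of the abstract, the batch size grows and shrinks with the inference time of the previous batch.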
