Abstract

There is great interest in driver assistance systems that use the head pose as an indicator of the visual focus of attention and the driver's mental state. Head pose estimation deduces the head orientation relative to the camera view and can be performed by model-based or appearance-based approaches. Model-based approaches use a geometrical face model usually derived from facial features, whereas appearance-based techniques describe the whole face image with a descriptor and generally treat pose estimation as a classification problem. Appearance-based methods are faster and better suited to discrete pose estimation. However, their performance depends strongly on the head descriptor, which must be chosen carefully to reduce the identity and lighting information contained in the face appearance. In this paper, we propose an appearance-based discrete head pose estimation method aiming to determine the driver's attention level from monocular visible-spectrum images, even when facial features are not visible. Specifically, we first propose a novel descriptor resulting from the fusion of four of the most relevant orientation-based head descriptors, namely steerable filters, the histogram of oriented gradients (HOG), Haar features, and an adapted version of the speeded-up robust features (SURF) descriptor. Second, to derive a compact, relevant, and consistent subset of descriptor features, a comparative study is conducted on several well-known feature selection algorithms. Finally, the obtained subset is fed to a support vector machine (SVM) classifier to learn head pose variations. As we show in experiments on the public Pointing'04 database as well as on our own real-world sequence, our approach describes the head with high accuracy and provides robust head pose estimation compared to state-of-the-art methods.
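To make the pipeline described above concrete, the sketch below shows a minimal appearance-based discrete pose estimator using only a HOG descriptor and an SVM classifier. It is an illustrative approximation, not the paper's method: the fused descriptor (steerable filters, HOG, Haar, adapted SURF) and the feature selection stage are omitted, and the window size, SVM parameters, and pose labels are assumptions chosen for the example.

```python
# Minimal sketch of an appearance-based discrete head pose pipeline
# (HOG descriptor + SVM). Assumes OpenCV and scikit-learn; the paper's
# descriptor fusion and feature selection steps are not reproduced here.
import cv2
import numpy as np
from sklearn.svm import SVC

# Hypothetical discrete pose classes (yaw angles in degrees).
POSE_LABELS = [-90, -60, -30, 0, 30, 60, 90]

# HOG over a 64x64 face crop: 16x16 blocks, 8x8 stride, 8x8 cells, 9 bins.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def describe(face_img):
    """Resize a cropped face image and compute its HOG feature vector."""
    face = cv2.resize(face_img, (64, 64))
    gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
    return hog.compute(gray).ravel()

def train(face_images, pose_labels):
    """Fit a multi-class SVM on descriptors of labelled face crops."""
    X = np.array([describe(img) for img in face_images])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X, pose_labels)
    return clf

def estimate_pose(clf, face_img):
    """Predict the discrete head pose class of a new face crop."""
    return clf.predict(describe(face_img).reshape(1, -1))[0]
```

In the full approach, the HOG vector would be concatenated with the other three descriptors, reduced by feature selection, and only then passed to the SVM.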

Highlights

  • The increasing number of traffic accidents in recent years has become a serious problem

  • Since there is no public database containing varied driver head poses, we acquired video sequences showing a driver in different head poses to perform our experiments

  • In this paper, we have proposed a head pose estimation approach using a single camera to identify driver inattention

Summary

Introduction

The increasing number of traffic accidents in recent years has become a serious problem, and automotive manufacturers and research laboratories are contributing to addressing it. Preventive measures such as alcohol tests and speed-measurement radar are deployed to reduce the number of traffic accidents, but hypovigilance remains one of the principal causes. A third category of approaches, based on physical signals, uses image processing techniques to measure the driver's vigilance level as reflected in the face appearance and head/facial feature activity. These techniques rely principally on studying facial features, especially the eye state [3,4,5], the head pose [6,7], or the mouth state [8]. Monitoring the driver's vigilance level with a single visible-spectrum (VS) camera, without depth or infrared (IR) information, remains a major challenge.
