Abstract

Image-based eye detection and gaze estimation have a wide range of potential applications, such as medical treatment, biometric recognition, and human-computer interaction. Although many researchers have attempted to solve these two problems, challenges remain due to variations in appearance and the scarcity of annotated images. In addition, most related work performs eye detection first, followed by gaze estimation via appearance learning. In this paper, we propose a unified framework that performs gaze estimation and eye detection simultaneously by learning cascade regression models from the appearance around eye-related key points. Intuitively, there is a coupled relationship among the location of the eye center, the shape of the eye-related key points, the appearance representation, and the gaze information. To exploit this coupling, at each cascade level we first learn a model that maps the shape and the appearance around the current eye-related key points to a three-dimensional gaze update. Then, with the help of the estimated gaze, we learn a second regression model that maps the gaze, shape, and appearance information to an eye-location update. By leveraging the power of cascade learning, the proposed method alternately optimizes the two tasks of eye detection and gaze estimation. Experiments are conducted on the GI4E and MPIIGaze benchmarks. The results show that the proposed method achieves favorable gaze-estimation accuracy and outperforms state-of-the-art methods in eye detection.
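To make the alternating structure concrete, the sketch below shows one way such a cascade could be organized, using plain linear least-squares regressors and a fixed feature matrix. The class name, the number of cascade levels, and the use of np.linalg.lstsq are illustrative assumptions rather than the paper's actual models; in particular, the full method would re-extract appearance features around the updated key points at every level, which is omitted here for brevity.

```python
import numpy as np

class AlternatingCascade:
    """Minimal sketch of an alternating cascade for gaze and eye location.

    At each cascade level, one regressor maps shape/appearance features to a
    3-D gaze update, and a second regressor maps gaze plus shape/appearance
    features to an eye-location update. All names and the linear-regression
    choice are assumptions for illustration, not the paper's exact models.
    """

    def __init__(self, n_levels=5):
        self.n_levels = n_levels
        self.gaze_regressors = []   # one gaze-update regressor per level
        self.eye_regressors = []    # one eye-location regressor per level

    def fit(self, features, gaze_targets, eye_targets):
        # features: (N, d) appearance/shape features (kept fixed here;
        # the real method would recompute them around updated key points).
        gaze = np.zeros_like(gaze_targets)   # current gaze estimate, (N, 3)
        eyes = np.zeros_like(eye_targets)    # current eye-center estimate, (N, 2)
        for _ in range(self.n_levels):
            # Step 1: regress the gaze update from shape + appearance.
            X_gaze = np.hstack([features, eyes])
            W_gaze, _, _, _ = np.linalg.lstsq(
                X_gaze, gaze_targets - gaze, rcond=None)
            gaze += X_gaze @ W_gaze
            self.gaze_regressors.append(W_gaze)

            # Step 2: regress the eye-location update, now conditioned on gaze.
            X_eye = np.hstack([features, eyes, gaze])
            W_eye, _, _, _ = np.linalg.lstsq(
                X_eye, eye_targets - eyes, rcond=None)
            eyes += X_eye @ W_eye
            self.eye_regressors.append(W_eye)
        return self

    def predict(self, features):
        # Apply the learned updates level by level, alternating between tasks.
        n = features.shape[0]
        gaze, eyes = np.zeros((n, 3)), np.zeros((n, 2))
        for W_gaze, W_eye in zip(self.gaze_regressors, self.eye_regressors):
            gaze += np.hstack([features, eyes]) @ W_gaze
            eyes += np.hstack([features, eyes, gaze]) @ W_eye
        return gaze, eyes
```

The design point this sketch tries to capture is the coupling described above: the gaze regressor sees the current eye estimate, and the eye regressor sees the freshly updated gaze, so each cascade level refines both quantities in turn.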
