Abstract

This paper presents a novel learning method for precise eye localization, a problem that must be solved to improve the performance of face processing algorithms. Few existing approaches can directly detect and localize eyes at arbitrary angles in predicted eye regions, face images, and original portraits alike. To preserve rotation invariance throughout the entire eye localization framework, a codebook of invariant local features is proposed for the representation of eye patterns. A heat map is then generated by integrating a two-class sparse representation classifier with a pyramid-like detection and localization strategy to perform discriminative classification and precise localization. Furthermore, several kinds of prior information are exploited to improve localization precision and accuracy. Experimental results on three different databases show that our method effectively locates eyes under arbitrary in-plane rotation (360°).
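
The sketch below is a minimal illustration of the two-class sparse representation classification step mentioned above, not the authors' implementation: the eye/non-eye dictionaries, feature dimensions, and the Lasso-based sparse coder are assumptions standing in for the codebook-encoded descriptors. A test descriptor is sparsely coded over the joint dictionary, and class-wise reconstruction residuals decide how "eye-like" it is (a quantity that could feed the heat map).

```python
# Hypothetical sketch of a 2-class sparse representation classifier (SRC).
# Assumes descriptors are already extracted with the rotation-invariant codebook,
# which is outside the scope of this sketch.
import numpy as np
from sklearn.linear_model import Lasso

def src_eye_score(y, D_eye, D_noneye, alpha=0.01):
    """Return an eye-likeness score for a test descriptor y.

    y        : (d,)        test descriptor
    D_eye    : (d, n_eye)  columns are eye training descriptors
    D_noneye : (d, n_non)  columns are non-eye training descriptors
    """
    D = np.hstack([D_eye, D_noneye])              # joint dictionary
    # Sparse coding: min ||y - D x||^2 + alpha * ||x||_1
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, y)
    x = coder.coef_
    n_eye = D_eye.shape[1]
    # Class-wise reconstruction residuals, using only each class's coefficients
    r_eye = np.linalg.norm(y - D_eye @ x[:n_eye])
    r_non = np.linalg.norm(y - D_noneye @ x[n_eye:])
    # A smaller eye residual relative to the non-eye residual means "more eye-like"
    return r_non - r_eye

# Toy usage with random data standing in for real codebook-encoded patches
rng = np.random.default_rng(0)
d = 64
D_eye = rng.standard_normal((d, 40))
D_noneye = rng.standard_normal((d, 40))
y = D_eye[:, 0] + 0.05 * rng.standard_normal(d)
print(src_eye_score(y, D_eye, D_noneye))
```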
