Abstract

This paper presents a recurrent learning-based facial attribute recognition method that mimics the visual fixation behavior of human observers. Concentrated views, simulating how an observer focuses on and explores parts of a facial image over time, are generated and fed into a recurrent network. The network decides on facial attributes based on features gleaned from these visual fixations. Experiments on facial expression, gender, and age datasets show that applying visual fixation to recurrent networks improves recognition rates significantly. The proposed method outperforms state-of-the-art recognition methods based not only on static facial features but also on dynamic facial features.
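The pipeline the abstract describes can be sketched as follows: crop a sequence of fixation patches ("concentrated views") from a face image, then feed their features step by step into a recurrent network whose final hidden state is classified. This is only an illustrative sketch, not the authors' implementation; the patch size, fixation points, feature extraction, and all class/function names below are assumptions.

```python
import numpy as np

def extract_glimpse(image, center, size):
    """Crop a square patch (a 'concentrated view') around a fixation
    point, clipped to the image borders. Illustrative helper."""
    r, c = center
    half = size // 2
    r0, r1 = max(0, r - half), min(image.shape[0], r + half)
    c0, c1 = max(0, c - half), min(image.shape[1], c + half)
    return image[r0:r1, c0:c1]

class GlimpseRNN:
    """Minimal Elman-style RNN that accumulates glimpse features over
    time and classifies from the final hidden state. Hypothetical
    stand-in for the paper's recurrent network (untrained weights)."""
    def __init__(self, feat_dim, hidden_dim, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0, 0.1, (hidden_dim, feat_dim))
        self.Wh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))
        self.Wo = rng.normal(0, 0.1, (n_classes, hidden_dim))

    def forward(self, glimpse_feats):
        h = np.zeros(self.Wh.shape[0])
        for x in glimpse_feats:  # one recurrent step per fixation
            h = np.tanh(self.Wx @ x + self.Wh @ h)
        logits = self.Wo @ h
        return int(np.argmax(logits))

# Usage: three fixations on a synthetic 64x64 "face" image
# (assumed positions roughly covering the eyes and mouth).
image = np.random.default_rng(1).random((64, 64))
fixations = [(20, 20), (20, 44), (44, 32)]
# Crude 16-d feature per glimpse: row-wise mean of a 16x16 patch.
feats = [extract_glimpse(image, p, 16).mean(axis=1) for p in fixations]
model = GlimpseRNN(feat_dim=16, hidden_dim=32, n_classes=7)
pred = model.forward(feats)
print(pred)  # an attribute class index in [0, 7)
```

A trained version of this idea would learn the recurrent weights (and possibly the fixation policy itself, as in recurrent attention models) rather than using fixed random parameters as here.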
