Abstract

Eye localization is crucial to acquiring large amounts of information. It not only helps people improve their understanding of others but is also a technology that enables machines to better understand humans. Although studies have reported satisfactory accuracy for frontal faces or head poses at limited angles, large head rotations generate numerous defects (e.g., disappearance of the eye), and existing methods are not effective enough to accurately localize eye centers. This study therefore makes three contributions to address these limitations. First, we propose a novel complete representation (CR) pipeline that can flexibly learn and generate two complete representations, namely the CR-center and CR-region, of the same identity. We also propose two novel eye center localization methods. The first method employs geometric transformation to estimate the rotational difference between two faces and an unknown-localization strategy to accurately transform the CR-center. The second method is based on image translation learning and uses the CR-region to train a generative adversarial network, which can then accurately generate and localize eye centers. Five image databases are employed to verify the proposed methods, and tests reveal that, compared with existing methods, the proposed methods can more accurately and robustly localize eye centers in challenging images, such as those showing considerable head rotation (yaw rotation of -67.5° to +67.5° and roll rotation of -120° to +120°), complete occlusion of both eyes, poor illumination combined with head rotation, head pose changes in the dark, and various gaze interactions.
