Abstract
This paper proposes a novel entropy-weighted Gabor-phase congruency (EWGP) feature descriptor for head-pose estimation based on feature fusion. Gabor features are robust and invariant to variations in orientation and illumination but cannot adequately express the amplitude characteristics of an image. By contrast, phase congruency (PC) captures amplitude information well. Because both illumination and amplitude vary across different image regions, we employ local entropy, which measures the randomness and information content of a region, to evaluate the orientation and amplitude cues during fusion. To the best of our knowledge, this is the first work to use entropy as the weight for fusing the Gabor and phase-congruency matrices in every region. The proposed EWGP feature matrix was evaluated on the Pointing’04 and FacePix datasets. The experimental results demonstrate that our method outperforms the state of the art in terms of MSE, MAE, and time cost.
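The abstract does not give the fusion formula itself, so the following is only a minimal sketch of the idea it describes: two precomputed, normalized feature maps (a Gabor response map and a PC map) are fused block by block, with each block's Shannon entropy serving as its fusion weight. The block size, histogram binning, and normalization convention are assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): block-wise entropy-weighted
# fusion of a Gabor feature map and a phase-congruency (PC) map.
# Both maps are assumed to be the same size and normalized to [0, 1].
import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy of the intensity distribution inside one block."""
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_fusion(gabor_map, pc_map, block=16):
    """Fuse two equally sized feature maps region by region.

    For every non-overlapping block, the entropy of each map acts as its
    fusion weight, so the map carrying more local information dominates.
    """
    assert gabor_map.shape == pc_map.shape
    fused = np.zeros_like(gabor_map, dtype=np.float64)
    h, w = gabor_map.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            g = gabor_map[y:y + block, x:x + block]
            c = pc_map[y:y + block, x:x + block]
            eg, ec = block_entropy(g), block_entropy(c)
            total = eg + ec
            wg = eg / total if total > 0 else 0.5
            fused[y:y + block, x:x + block] = wg * g + (1.0 - wg) * c
    return fused
```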
Highlights
Visual focus of attention (VFoA) estimation determines at what or whom a person is looking and is highly correlated with head-pose estimation [1]
Head poses convey an abundance of information in natural interpersonal communication (NIC) and human-computer interaction (HCI) [2]; an increasing number of researchers are seeking more effective and robust methodologies for head-pose estimation
Head poses play a critical role in artificial intelligence (AI) applications and reveal considerable latent information about personal intent
Summary
1.1 Introduction Visual focus of attention (VFoA) estimation determines at what or whom a person is looking and is highly correlated with head-pose estimation [1]. We employ an elliptical skin model in a non-linearly transformed YCbCr color space, as proposed in [43]. This algorithm detects face regions in probe images with good performance, and its core operation is described by the corresponding equations (see the sketch below). The Pointing’04 head-pose dataset was used to evaluate the phase-congruency features after face detection with the elliptical skin model; to this end, binary-edge images were collected. These experimental results indicate that the proposed EWGP representation is suitable for head-pose estimation in the yaw and pitch directions.
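As a rough illustration of the elliptical skin model mentioned above, the snippet below tests whether a pixel's chrominance falls inside a skin ellipse in the (Cb, Cr) plane. The ellipse parameters are the values commonly quoted for the model in [43] and are included only for illustration; the luma-dependent non-linear transform of Cb/Cr that [43] applies beforehand is omitted here, so this is a simplified sketch rather than the paper's detection pipeline.

```python
# Minimal sketch of the elliptical skin test in the CbCr plane, loosely
# following the model of [43]. Parameter values are illustrative.
import numpy as np

CX, CY = 109.38, 152.02   # ellipse center in the (transformed) CbCr plane
THETA = 2.53              # ellipse rotation angle in radians
ECX, ECY = 1.60, 2.41     # center offsets after rotation
A, B = 25.39, 14.03       # semi-major / semi-minor axes

def skin_mask(cb, cr):
    """Boolean mask marking pixels whose (Cb, Cr) fall inside the skin ellipse."""
    cos_t, sin_t = np.cos(THETA), np.sin(THETA)
    x =  cos_t * (cb - CX) + sin_t * (cr - CY)
    y = -sin_t * (cb - CX) + cos_t * (cr - CY)
    return ((x - ECX) ** 2) / A ** 2 + ((y - ECY) ** 2) / B ** 2 <= 1.0
```

Given the Cb and Cr channels of an image as float arrays, `skin_mask(cb, cr)` yields a binary face-candidate map that can then be post-processed (e.g., with connected components) to localize face regions before feature extraction.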