Abstract

Face occlusion has been a long-standing challenge in face recognition. In state-of-the-art deep Convolutional Neural Network (CNN) face recognition models, occluded facial parts are generally embedded into the learned features together with the non-occluded parts in an equivalent manner. As a result, the discriminative power of the generated face representation may be weakened for occluded face images. To address this problem, we propose MaskNet, a trainable module that can be incorporated into existing CNN architectures. With end-to-end training supervised only by personal identity labels, MaskNet learns to adaptively generate different feature map masks for different occluded face images. Intuitively, MaskNet automatically assigns higher weights to the hidden units activated by non-occluded facial parts and lower weights to those activated by occluded facial parts. Experiments on datasets consisting of real-life and synthetic occluded faces demonstrate that MaskNet can effectively improve the robustness of CNN models to occlusions in face recognition.
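
To make the idea concrete, below is a minimal sketch of a mask-generating module of the kind the abstract describes: a small trainable branch that predicts element-wise weights in [0, 1] for an intermediate feature map and re-weights it, so that units driven by occluded regions can be suppressed. The class name `MaskModule`, the specific convolutional branch, and the tensor shapes are illustrative assumptions, not the paper's actual MaskNet architecture, which is not detailed in the abstract.

```python
import torch
import torch.nn as nn


class MaskModule(nn.Module):
    """Hypothetical mask-generating branch for an intermediate CNN feature map.

    Given a feature map x of shape (N, C, H, W), a small convolutional branch
    predicts a mask in [0, 1] that re-weights the feature map element-wise.
    Trained end-to-end with the host network, it needs no mask supervision,
    only the identity labels used by the recognition loss.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.mask_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),  # mask values constrained to [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = self.mask_branch(x)  # (N, C, H, W) per-unit weights
        return x * mask             # re-weighted feature map


if __name__ == "__main__":
    # Toy usage: insert the module after a convolutional stage of a face CNN.
    feat = torch.randn(2, 64, 28, 28)   # dummy intermediate feature map
    masked = MaskModule(64)(feat)
    print(masked.shape)                 # torch.Size([2, 64, 28, 28])
```

Because the mask is multiplicative and differentiable, gradients from the identity-classification loss alone can shape which spatial and channel units get down-weighted, which matches the abstract's claim that no occlusion annotations are required.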
