<p>Real-world photos have traditionally been used to train classifiers for face liveness detection, since face presentation attacks (PA) and genuine images overlap substantially. However, combining deep convolutional neural networks (CNNs) with real-world face photos to identify face liveness has received very little study. A face recognition system should identify real faces as well as spoofing attempts that use printed or digital presentations. One genuine anti-spoofing strategy is to observe facial liveness cues such as eye blinking and lip movement; however, this strategy fails against replay attacks that use video. The proposed anti-spoofing technique consists of two modules: a ConvNet classifier module and a blinking-eye module that measures lip and eye movement. Test results demonstrate that the developed module can identify various face spoofing attacks, including those mounted with posters, masks, or smartphones. In this study, convolutional features are adaptively fused from deep-CNN-generated face images and from convolutional layers learned on real-world recognition. Extensive intra-database and cross-database experiments on state-of-the-art face anti-spoofing databases, including CASIA, OULU, NUAA, and Replay-Attack, demonstrate the effectiveness of the proposed face liveness detection method, which achieves an accuracy of 94.30%.</p>
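The two-module decision described above can be sketched as a simple fusion rule: accept a face as live only when the ConvNet classifier and at least one motion cue (blink or lip movement) agree. This is a minimal illustrative sketch; the function name, thresholds, and inputs are assumptions, not details from the paper.

```python
# Hypothetical fusion of the two anti-spoofing modules: a ConvNet
# liveness score plus blink/lip-movement cues. All thresholds and
# parameter names here are illustrative assumptions.

def is_live_face(cnn_live_prob: float,
                 blinks_detected: int,
                 lip_movement_score: float,
                 cnn_threshold: float = 0.5,
                 min_blinks: int = 1,
                 lip_threshold: float = 0.2) -> bool:
    """Declare the face live only if the ConvNet classifier is confident
    AND at least one motion cue (eye blink or lip movement) is present."""
    motion_cue = (blinks_detected >= min_blinks
                  or lip_movement_score >= lip_threshold)
    return cnn_live_prob >= cnn_threshold and motion_cue


# A printed-photo attack may fool the classifier alone, but it produces
# no blinks and no lip motion, so the fused decision rejects it.
print(is_live_face(0.9, blinks_detected=2, lip_movement_score=0.0))  # live subject
print(is_live_face(0.9, blinks_detected=0, lip_movement_score=0.0))  # printed photo
```

Requiring both cues is what lets the combined system reject static presentations (posters, printed masks) that a single appearance-based classifier might misjudge.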