Abstract

In the COVID-19 era, the performance of existing face recognition algorithms has degraded owing to the partial occlusion caused by face masks. A new Indian face image dataset, titled the Handcrafted Indian Face (HIF) dataset, is proposed to address these issues, viz. varied illumination, pose, and partial occlusion conditions. It bridges the gap between the performance of deep learning (DL) models before and after the COVID-19 effect. A novel strategy for selecting the train-test samples is also presented, which improves the accuracy of existing state-of-the-art DL models. In this paper, a new DL architecture named the InceptBlock Enhanced Attention Fusion Network (IBEAFNet) is proposed, which combines the Enhanced Convolution Block Attention Module (ECBAM) with the InceptionV3 architecture. The placement of the attention layers allows the network to suppress less relevant masked regions of the face while emphasizing significant fine- and coarse-level features with reduced complexity. IBEAFNet is trained and tested on two existing datasets, viz. CASIA and Yale (including simulated masked images), as well as on the proposed HIF dataset. Its performance is compared with the results obtained by replacing the attention layers in IBEAFNet with SENet and CBAM blocks. IBEAFNet outperformed the state-of-the-art models, achieving accuracies of 91.00%, 89.5%, and 93.00% on the CASIA, Yale, and HIF datasets, respectively.
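The abstract describes IBEAFNet only at a high level. As a rough illustration of how a convolutional block attention module can be attached to InceptionV3-style features to reweight masked versus unmasked face regions, the following PyTorch sketch implements a standard CBAM-style channel-and-spatial attention head. The class names, reduction ratio, the 2048x8x8 feature-map shape (as produced by InceptionV3's last mixed block for 299x299 inputs), and the classifier head are assumptions for illustration only and do not reproduce the paper's exact ECBAM design or layer placement.

# Minimal sketch, assuming a CBAM-style attention head on top of
# InceptionV3-like features; the ECBAM internals and placement used
# in IBEAFNet are not specified in the abstract.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to average- and max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale                     # reweight channels

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel-wise average and max maps highlight informative locations,
        # e.g. the unmasked upper part of the face.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * scale                     # reweight spatial locations

class AttentionHead(nn.Module):
    """CBAM-style attention plus classifier head on backbone features."""
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        feats = self.spatial_att(self.channel_att(feats))
        return self.fc(self.pool(feats).flatten(1))

# Illustrative usage on a dummy InceptionV3-style feature map
# (batch of 4, 2048 channels, 8x8 grid); num_classes is arbitrary here.
head = AttentionHead(channels=2048, num_classes=100)
logits = head(torch.randn(4, 2048, 8, 8))
print(logits.shape)  # torch.Size([4, 100])

In this arrangement the attention head refines the backbone's output features before classification; swapping the two attention sub-modules for SENet or CBAM blocks, as done in the paper's comparison, would only change the head while leaving the InceptionV3 backbone untouched.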
