Abstract

Biometric systems that rely on a single modality do not meet the performance requirements of large-scale applications, owing to issues such as noisy data, intra-class variations, restricted degrees of freedom, spoof attacks and unacceptable error rates. This work develops a multimodal biometric recognition (MBR) model comprising four main phases: (i) pre-processing, (ii) segmentation, (iii) feature extraction and (iv) classification. Initially, the images are pre-processed, and the pre-processed images are segmented using the Otsu thresholding model. The segmented images are then subjected to feature extraction, in which local features are exploited: Gabor filter features, Zernike moment features and the proposed local binary pattern features are extracted. Subsequently, a fusion framework is developed that enhances classification ability with minimal feature dimensionality for MBR. Recognition is then performed by an optimized neural network (NN) model. As a novelty, the NN is trained with a new modified dragonfly algorithm that selects the optimal weights. Finally, an analysis is carried out to validate the superiority of the presented model in terms of different measures.
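To make the described pipeline concrete, the following is a minimal illustrative sketch (not the authors' implementation) of the segmentation and local feature-extraction steps, assuming grayscale input images and the scikit-image library. The Otsu threshold, Gabor responses and a standard uniform LBP histogram are shown; the paper's proposed modified LBP variant, Zernike moments, fusion framework and dragonfly-optimized NN are not reproduced here, and the filter parameters are arbitrary placeholders.

```python
# Illustrative sketch only: Otsu segmentation followed by Gabor and
# standard LBP feature extraction with scikit-image (assumed library).
import numpy as np
from skimage import data
from skimage.filters import threshold_otsu, gabor
from skimage.feature import local_binary_pattern

image = data.camera().astype(float)   # placeholder grayscale biometric image

# Segmentation: Otsu thresholding separates foreground from background.
mask = image > threshold_otsu(image)
segmented = image * mask

# Gabor filter responses at a few orientations (frequency chosen arbitrarily).
gabor_feats = []
for theta in np.linspace(0, np.pi, 4, endpoint=False):
    real, _ = gabor(segmented, frequency=0.2, theta=theta)
    gabor_feats.extend([real.mean(), real.var()])

# Standard uniform LBP histogram (the paper proposes a modified LBP variant).
lbp = local_binary_pattern(segmented, P=8, R=1, method="uniform")
hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)

# Concatenated local feature vector that would feed the fusion/classification stage.
feature_vector = np.concatenate([gabor_feats, hist])
print(feature_vector.shape)
```

In the full model, such per-modality feature vectors would be fused and passed to the NN classifier whose weights are tuned by the modified dragonfly algorithm.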

