Abstract

Multi-modal biometrics refers to the use of multiple biometric indicators for individual identification in personal recognition systems. Compared to unimodal biometrics, which relies on a single biometric trait such as a fingerprint, face, palm print or iris, multi-modal authentication offers a higher degree of assurance. This study produces a new optimal score value by fusing deep learning with multi-modal biometrics. The proposed approach consists of three main stages: pre-processing, feature extraction and ensemble recognition. First, median filtering and region-of-interest (ROI) extraction are applied to pre-process the captured biometric images of the wrist, dorsal and palm veins and the palm print. Relevant features are then extracted from the pre-processed images of each modality. These features are used to determine whether a sample is genuine or an impostor. For this decision, Neural Network 1 (NN1), Neural Network 2 (NN2) and a Deep Convolutional Neural Network (DCNN) form a new deep ensemble model. The outputs of NN1 and NN2 serve as inputs to the DCNN, which indicates whether the biometric data are authentic. Finally, the results are refined by fine-tuning the DCNN weights using a new hybrid optimisation scheme referred to as Butterfly combined Tunicate Swarm Optimisation (BTSA).
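The abstract gives no implementation details, so the following is only a minimal sketch, written in PyTorch, of how the described ensemble could be wired together: two feature-level networks (NN1, NN2) whose outputs are concatenated and passed to a convolutional head (DCNN) that produces a genuine/impostor score. All layer sizes, the two-modality split and the class names are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of the deep ensemble described in the abstract.
# NN1/NN2 process features from two modalities; their outputs are
# concatenated and passed to a small DCNN that scores genuine vs. impostor.
# All dimensions and layer choices below are assumptions for illustration.
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Fully connected sub-network (stands in for NN1 or NN2)."""
    def __init__(self, in_dim: int, out_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class DeepEnsemble(nn.Module):
    """Outputs of NN1 and NN2 feed a 1-D convolutional DCNN head."""
    def __init__(self, dim1: int, dim2: int):
        super().__init__()
        self.nn1 = FeatureNet(dim1)
        self.nn2 = FeatureNet(dim2)
        self.dcnn = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, 1),  # single genuine/impostor score
        )

    def forward(self, x1, x2):
        fused = torch.cat([self.nn1(x1), self.nn2(x2)], dim=1)
        return torch.sigmoid(self.dcnn(fused.unsqueeze(1)))

# Example usage with random feature vectors from two modalities.
model = DeepEnsemble(dim1=100, dim2=100)
score = model(torch.randn(4, 100), torch.randn(4, 100))
print(score.shape)  # torch.Size([4, 1]); values near 1 indicate genuine
```

In the paper's scheme the DCNN weights would additionally be fine-tuned by the BTSA metaheuristic rather than (or alongside) gradient descent; that optimisation step is not shown here.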
