This article proposes a multimodal biometric enrolment and authentication system (MBEAS) with modified score-level fusion and a TriBlendNN-based template-matching approach. It consists of four phases: (i) enrolment, (ii) security, (iii) storage, and (iv) authentication. Initially, raw data for the iris, face, hand, speech, signature, handwriting, fingerprint, and keystroke modalities are collected from the BiosecurID database. The raw images are pre-processed via resizing and cropping, the raw speech signal via wavelet denoising and spectral subtraction, and the raw keystroke data via Z-score normalization. The pre-processed images of the iris, face, signature, hand, handwriting, and fingerprints are then segmented via optimized watershed segmentation, and the pre-processed speech signal via voice activity detection (VAD). From the segmented data, the optimal features are extracted: local binary patterns (LBP) for the iris, Inception-V3 features for the face, shape and gray-level co-occurrence matrix (GLCM) features for the signature and handwriting, Mel-frequency cepstral coefficients (MFCC) for speech, minutiae for the fingerprints, palm-print features for the hand, and keystroke features. Feature fusion is then performed via modified score-level fusion. After the enrolment phase, the fused feature data is secured using watermarking, and the watermarked data is stored in cloud storage. The final stage is authentication, wherein template matching is performed by the newly proposed TriBlendNN model, a combination of a CNN, an RNN, and a bidirectional LSTM (Bi-LSTM); the authentication decision is produced by this template-matching step. The proposed model is implemented in Python, and its accuracies at learning percentages of 70% and 80% are 96.89% and 97.76%, respectively.
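The Z-score normalization applied to the keystroke data standardizes each timing feature to zero mean and unit variance. A minimal sketch (the timing values below are hypothetical; the paper's actual keystroke features are not specified in the abstract):

```python
import numpy as np

def zscore_normalize(features):
    """Z-score normalization: subtract the per-column mean and divide by
    the per-column standard deviation, so each feature has mean 0, std 1."""
    features = np.asarray(features, dtype=float)
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant (zero-variance) features
    return (features - mu) / sigma

# Hypothetical keystroke hold-time samples in milliseconds
# (rows = typing samples, columns = key-timing features)
X = np.array([[120.0,  95.0],
              [130.0, 105.0],
              [110.0, 100.0]])
Z = zscore_normalize(X)
```

After this step each column of `Z` has mean 0 and standard deviation 1, which puts the keystroke timings on a common scale before fusion.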
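The abstract does not detail what is "modified" in the proposed score-level fusion, but conventional score-level fusion combines per-modality matcher scores after normalizing them to a common range. A minimal weighted-sum sketch under that assumption (the scores and weights below are hypothetical):

```python
import numpy as np

def minmax(scores):
    """Min-max normalization of one modality's matcher scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def fuse_scores(per_modality_scores, weights):
    """Weighted-sum score-level fusion: normalize each modality's scores,
    then combine them with weights that are rescaled to sum to 1."""
    norm = [minmax(s) for s in per_modality_scores]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * ni for wi, ni in zip(w, norm))

# Hypothetical scores for two candidate templates from two matchers
face_scores = [0.2, 0.8]     # e.g. face-matcher similarities
finger_scores = [10.0, 30.0] # e.g. fingerprint minutiae match counts
fused = fuse_scores([face_scores, finger_scores], weights=[0.6, 0.4])
```

Because both matchers here rank the second template higher, the fused scores preserve that ranking; the fused score would then feed the TriBlendNN-based template-matching decision.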