Abstract

Owing to their unique and measurable properties, biometric security systems are more reliable and secure than traditional ones. However, unimodal biometric systems suffer from various problems, such as spoof attacks, non-universality, intra-class variance, inter-class similarity, and noisy data. To overcome these problems, multimodal biometric systems, which combine features from multiple traits, have emerged to authenticate individuals efficiently in various real-world applications. Along the same line, this paper proposes a multimodal biometric system for human recognition based on deep-feature fusion of electrocardiogram (ECG) signals and ear images. The proposed system is harder to spoof than current systems, as the ear provides a structure that remains stable over a substantial period of a person's life, while the ECG confirms the person's liveness. The system applies a transfer-learning methodology to extract discriminative deep features by exploiting a pre-trained VGG-m Net model. Furthermore, to improve the efficiency of the proposed model's training, augmentation techniques were utilized to enlarge the training data. A series of experiments was conducted to assess the performance of the proposed approach on unimodal and multimodal biometric traits. The experimental results reveal that the proposed system achieves promising results and outperforms both the unimodal ECG and ear systems and other state-of-the-art multimodal biometric systems.
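The feature-level fusion the abstract describes can be illustrated with a minimal sketch. The random vectors below are placeholders standing in for the deep features a pre-trained network (VGG-m in the paper) would produce for each modality; the 4096-dimensional size, the L2 normalization step, and the cosine-similarity matcher are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

# Placeholders for deep features of each modality (assumed 4096-D,
# as in common VGG-style fully connected layers).
rng = np.random.default_rng(0)
ecg_feat = rng.standard_normal(4096)  # ECG deep features (placeholder)
ear_feat = rng.standard_normal(4096)  # ear-image deep features (placeholder)

def l2_normalize(v):
    """Scale a feature vector to unit length so neither modality dominates."""
    return v / np.linalg.norm(v)

# Feature-level fusion: normalize each modality, then concatenate.
fused = np.concatenate([l2_normalize(ecg_feat), l2_normalize(ear_feat)])
print(fused.shape)  # (8192,)

# Matching a probe against an enrolled template via cosine similarity.
template = fused.copy()
score = float(np.dot(fused, template)
              / (np.linalg.norm(fused) * np.linalg.norm(template)))
print(round(score, 3))  # 1.0 for identical vectors
```

In a real system the enrolled template would come from a gallery of registered users, and the similarity score would be thresholded to accept or reject the claimed identity.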
