Abstract

To overcome the drawbacks encountered in unimodal biometric systems for person authentication, multimodal biometric methods are needed. This paper presents an efficient feature-level fusion of iris and ear images using SIFT descriptors, which extract the iris and ear features separately. These features are then combined into a single feature vector called the fused template. The generated template is enrolled in the database; at authentication time, the SIFT features of the input iris and ear images are matched against the enrolled template of the claiming user using the Euclidean distance. The proposed method has been applied to a synthetic multimodal biometric database built from the CASIA and USTB 2 databases, which provide the iris and ear image sets respectively. The performance of the proposed method is evaluated using the false rejection rate (FRR), the false acceptance rate (FAR), and accuracy. The obtained results show that fusion at the feature level outperforms the iris and ear authentication systems taken separately.

Keywords: Multimodal biometrics, Feature level fusion, SIFT, Iris biometrics, Ear biometrics
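As a rough illustration of the pipeline described in the abstract, the sketch below extracts SIFT descriptors from an iris image and an ear image with OpenCV, stacks them into a single fused template, and scores a probe against an enrolled template using Euclidean (L2) distance. The file names, the Lowe ratio test, and the acceptance threshold are illustrative assumptions and not details taken from the paper.

import cv2
import numpy as np

def extract_fused_template(iris_path, ear_path):
    # Extract SIFT descriptors from the iris and ear images separately,
    # then stack them into a single fused feature template.
    sift = cv2.SIFT_create()
    iris = cv2.imread(iris_path, cv2.IMREAD_GRAYSCALE)
    ear = cv2.imread(ear_path, cv2.IMREAD_GRAYSCALE)
    _, iris_desc = sift.detectAndCompute(iris, None)
    _, ear_desc = sift.detectAndCompute(ear, None)
    # Feature-level fusion: concatenate the two descriptor sets.
    return np.vstack([iris_desc, ear_desc])

def match_score(enrolled, probe, ratio=0.75):
    # Match probe descriptors against the enrolled fused template with
    # Euclidean (L2) distance; the ratio test is an assumed filtering step.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(probe, enrolled, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good)

if __name__ == "__main__":
    # File names and the decision threshold below are hypothetical.
    enrolled = extract_fused_template("iris_enrolled.png", "ear_enrolled.png")
    probe = extract_fused_template("iris_probe.png", "ear_probe.png")
    score = match_score(enrolled, probe)
    print("Accept" if score >= 20 else "Reject", "score =", score)

In a full evaluation, such scores would be computed for genuine and impostor pairs across the whole database to derive the FAR, FRR, and accuracy figures reported in the paper.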
