Abstract

Automated human authentication is becoming increasingly popular in a variety of daily activities, ranging from surveillance to commercial applications. While many biometric modalities can be used, ear recognition has proven its value whenever the ear is available for capture. Ears offer specific advantages over competing modalities for identifying cooperative and non-cooperative individuals in both controlled and challenging environments. The performance of ear recognition systems can be impacted by several factors, including standoff distance, ear pose angle, and ear image quality. While all three factors can degrade ear recognition performance, here we focus on the latter two using real data, and assess the standoff distance factor by synthetically generating blurry and noisy images to simulate ear images captured at longer distances. This work is inspired by various studies in the literature that discuss how and why challenging biometric images of different modalities affect the recognition performance of the associated biometric systems. Specifically, we focus on how different ear image distortions and yaw pose angles affect the performance of various deep learning based ear recognition models. Our contributions are threefold. First, we use a challenging ear dataset with a wide range of yaw pose angles to evaluate the ear recognition performance of several original ear matching approaches. Second, by examining multiple convolutional neural network (CNN) architectures and employing multiple techniques for the learning process, we determine the most efficient CNN-based ear recognition approach. Third, we investigate the impact of multiple image degradation factors, including variations in blurriness, additive noise, brightness, and contrast, on the performance of a set of ear recognition CNN models.
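To make the degradation protocol concrete, the following is a minimal sketch (not the authors' implementation) of how blur, additive noise, and brightness/contrast variations can be applied to an ear image using OpenCV and NumPy. The function name and parameter values are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch of the kinds of synthetic degradations described above,
# assuming OpenCV and NumPy; values are illustrative, not the paper's settings.
import cv2
import numpy as np

def degrade_ear_image(img, blur_sigma=2.0, noise_std=10.0,
                      brightness=20, contrast=1.2):
    # Gaussian blur to approximate loss of detail at longer standoff distances
    out = cv2.GaussianBlur(img, (0, 0), sigmaX=blur_sigma)

    # Additive Gaussian noise
    noise = np.random.normal(0.0, noise_std, out.shape)
    out = np.clip(out.astype(np.float32) + noise, 0, 255).astype(np.uint8)

    # Brightness and contrast adjustment: out = contrast * out + brightness
    out = cv2.convertScaleAbs(out, alpha=contrast, beta=brightness)
    return out
```

A degraded copy of each gallery or probe image produced this way can then be fed to the CNN models to measure how recognition accuracy changes with each distortion level.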
