Abstract

With the proliferation of mobile devices and the availability of network bandwidth, more services are being offered online, and it becomes imperative that users be authenticated before these services are provided to them. Most current applications verify a user's identity with a combination of username and password, a one-time token, or a passcode. With the heavy emphasis on authentication for access to various services and applications, individual biometrics are gaining popularity as a method of verification due to their unique nature and the difficulty of forging them. Biometric authentication is successfully employed in a number of applications, such as frequent flyer programs, criminal identification, border security and airport passenger screening, to name a few. In all of the above applications, the environment in which the biometric is captured is controlled, hence the quality of the captured image is guaranteed. However, the use of biometric authentication by a casual user poses additional problems. The primary ones lie in (i) the devices necessary to capture a user's biometric and transmit it to the application server; (ii) the variability of the capturing environments, which introduces unpredictable errors into the captured image; and (iii) the impracticality of achieving a zero error rate, unlike password- or PIN-based authentication techniques. These issues are preventing the deployment of biometrics as a means of authentication and identification. With the popularity of smartphones and their built-in cameras, it is now possible for users to capture their biometric image and send it to an application for authentication. However, errors introduced by the capturing environment need to be identified and corrected before the matching process. Unfortunately, one may not know in advance the kinds of errors that will occur, so no pre-processing strategy can remove all of these unknown errors. Conversely, using only the error-free partial information of a biometric reduces the useful features available to the matching process for identification.

This thesis addresses these limitations by proposing post-processing techniques that improve confidence in biometric matching. This is achieved by incorporating strategies that compensate for the errors introduced by variability in the capturing environments, proposing a similarity metric that can confidently decide whether a query biometric belongs to the genuine person, and using as much information as possible from the captured biometric. The major contributions of the thesis are as follows. First, the errors introduced in the capturing environments are modelled as a noisy communication channel, and the principles of error-correcting codes are used to transform the query biometric so that it is closer to the user's registered biometric. Second, to improve the confidence level in the matching and identification process, a modified similarity metric in the form of the length of the common substring is used. This metric aims to give maximum similarity scores for genuine matches and minimum scores for impostor matches. Third, a non-uniform matching strategy that further improves confidence in the matching process is presented. In practice, certain users tend to produce a high degree of errors and create overlap between genuine and impostor matches; hence, we identify these users and design the error-correcting codes to tackle these errors rather than removing such users' biometrics. Finally, the performance of the proposed techniques is evaluated quantitatively by running experiments on a number of iris datasets, and the results are compared with other known techniques.
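To make the noisy-channel view of capture errors concrete, the following is a minimal fuzzy-commitment-style sketch in Python in which an error-correcting code (a simple 3x repetition code) pulls a noisy query code back toward the enrolled one. The code choice, code lengths, helper-data construction and all names are illustrative assumptions for exposition only, not the scheme developed in the thesis.

# A minimal fuzzy-commitment-style sketch of the noisy-channel idea: capture
# errors are treated as bit flips on a binary iris code, and an error-correcting
# code (here a simple 3x repetition code) pulls the query code back toward the
# enrolled one. Code choice, lengths, helper-data construction and names are
# illustrative assumptions, not the scheme developed in the thesis.
import random

REPEAT = 3  # each message bit repeated 3 times; corrects 1 flip per group

def encode(bits):
    return [b for b in bits for _ in range(REPEAT)]

def decode(bits):
    # Majority vote within each group of REPEAT bits.
    return [int(sum(bits[i:i + REPEAT]) > REPEAT // 2)
            for i in range(0, len(bits), REPEAT)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

random.seed(1)

# Enrolment: bind the enrolled iris code to a random codeword via helper data.
message = [random.randint(0, 1) for _ in range(8)]
codeword = encode(message)                                        # 24 bits
enrolled = [random.randint(0, 1) for _ in range(len(codeword))]   # enrolled iris code
helper = xor(enrolled, codeword)                                  # stored helper data

# Query: a freshly captured code differs from the enrolled one by a few bit flips
# (here one flip per repetition group, i.e. within the code's correction capacity).
query = list(enrolled)
for i in (0, 7, 14):
    query[i] ^= 1

# Correction: unmask with the helper, decode the noisy codeword, re-encode, re-mask.
corrected = xor(helper, encode(decode(xor(query, helper))))

print("bit errors before correction:", sum(a != b for a, b in zip(query, enrolled)))      # 3
print("bit errors after correction: ", sum(a != b for a, b in zip(corrected, enrolled)))  # 0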
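The common-substring similarity metric can likewise be sketched on binary iris codes. The longest-common-substring computation below is standard dynamic programming; the normalization by code length and the example codes are assumptions made only for illustration, not the exact metric or data used in the thesis.

# A sketch of a common-substring similarity score between two binary iris codes,
# represented as bit strings. The dynamic-programming longest-common-substring
# computation is standard; the normalization and the example codes are
# illustrative assumptions, not the exact metric used in the thesis.

def longest_common_substring(a: str, b: str) -> int:
    """Length of the longest contiguous block of bits shared by a and b."""
    best = 0
    prev = [0] * (len(b) + 1)   # prev[j]: match run ending at a[i-1], b[j-1]
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best

def similarity(query: str, enrolled: str) -> float:
    """Normalized score in [0, 1]; higher suggests a genuine match."""
    return longest_common_substring(query, enrolled) / max(len(query), len(enrolled))

enrolled = "110100111010"
genuine = "110100111110"    # one capture error relative to the enrolled code
impostor = "010011000101"
print("genuine :", round(similarity(genuine, enrolled), 2))    # 0.75
print("impostor:", round(similarity(impostor, enrolled), 2))   # 0.5

In this toy example the genuine query scores higher than the impostor despite the capture error, which is the separation between genuine and impostor scores that the metric is designed to maximize.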
