Abstract

Iris localisation and segmentation are challenging and critical tasks in iris biometric recognition. Especially in non-cooperative and non-ideal environments, their impact on overall system performance has been identified as a major issue. To avoid the propagation of errors along the processing chain, this paper investigates iris fusion at segmentation level, prior to feature extraction, and presents a framework for this task. A novel intelligent reference method for iris segmentation-level fusion is presented, which uses a learning-based approach to predict ground truth segmentation performance from quality indicators, and model-based fusion to create combined boundaries. The new technique is analysed with regard to its capability to combine segmentation results (pupillary and limbic boundaries) of multiple segmentation algorithms. Results are validated on pairwise combinations of four open source iris segmentation algorithms on the public CASIA and IITD iris databases, illustrating the high versatility of the proposed method.
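The core idea of the abstract — predicting ground truth segmentation performance from quality indicators and using the prediction to guide fusion — can be sketched in simplified form. The paper uses a trained neural network as the predictor and model-based combination of boundaries; the linear scorer, the selection-based fusion, and all names and weights below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of quality-guided segmentation-level fusion:
# each candidate algorithm yields boundary parameters plus quality
# indicators; a stand-in predictor scores expected segmentation
# accuracy, and the best-scored candidate's boundaries are kept.
from dataclasses import dataclass


@dataclass
class Segmentation:
    algorithm: str     # e.g. "CAHT", "WAHET", "IFPP", "OSIRIS"
    pupillary: tuple   # inner boundary parameters, e.g. (cx, cy, r)
    limbic: tuple      # outer boundary parameters, e.g. (cx, cy, r)
    quality: dict      # quality indicators, e.g. contrast, sharpness


def predict_accuracy(seg, weights):
    """Linear stand-in for the learned performance predictor."""
    return sum(weights.get(k, 0.0) * v for k, v in seg.quality.items())


def fuse(candidates, weights):
    """Select the candidate with the highest predicted accuracy."""
    return max(candidates, key=lambda s: predict_accuracy(s, weights))
```

In the paper's framework the predictor is learned from ground truth segmentation performance, and rejected results can still contribute to a model-based combined boundary rather than being discarded outright.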

Highlights

  • Personal recognition from human iris images comprises several steps: image capture, eye detection, iris localisation, boundary detection, eyelid and noise masking, normalisation, feature extraction, and feature comparison [1]

  • The contributions of this paper are as follows: (1) a generalised fusion framework for combining iris segmentation results extending [5] towards including quality-based predictors of segmentation performance guiding the selection of contributing information; (2) a reference implementation based on neural networks and augmented model-based combination of segmentation evidence using iris mask post-processing; and (3) an evaluation of the proposed methods analysing pairwise combinations of segmentation algorithms — contrast-adjusted Hough transform (CAHT), weighted adaptive Hough and ellipsopolar transforms (WAHET), iterative Fourier-based pulling and pushing (IFPP), and the open source iris recognition toolkit (OSIRIS) — as representatives for elliptic, circular, and free-form iris segmentation models, using min-max dyadic wavelet and 1-D Gabor phase quantisation feature extraction


Summary

Introduction

Personal recognition from human iris (eye) images comprises several steps: image capture, eye detection, iris localisation, boundary detection, eyelid and noise masking, normalisation, feature extraction, and feature comparison [1]. Given the circular (elliptic, respectively, for off-axis acquisitions) shape of the iris, the ultimate outcome needed for iris normalisation is a parameterisation of the inner and outer iris boundaries P, L : [0, 2π) → [0, m] × [0, n] enclosing non-zero values (iris pixels) in N (ignoring noise and occlusions to avoid non-linear distortions [7]). Using these boundaries, the iris texture is mapped into a coordinate system spanning angle θ and pupil-to-limbic radial distance r [8]. To combine the advantages of different algorithms effectively, some segmentation results have to be rejected, based on the accuracy of the segmentation.
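The mapping into an (θ, r) coordinate system described above can be sketched as follows, assuming circular pupillary and limbic boundaries (Daugman-style "rubber sheet" normalisation); function and parameter names are illustrative, not taken from any of the cited toolkits.

```python
# Minimal sketch of pupil-to-limbus normalisation: sample image
# coordinates on a theta x r polar grid between the pupillary
# boundary P and the limbic boundary L, both modelled as circles.
import math


def boundary_point(cx, cy, radius, theta):
    """Point on a circular boundary at angle theta."""
    return (cx + radius * math.cos(theta), cy + radius * math.sin(theta))


def normalise_coords(pupil, limbus, n_theta=256, n_r=32):
    """Grid of image coordinates; pupil/limbus are (cx, cy, r) circles."""
    grid = []
    for i in range(n_theta):
        theta = 2.0 * math.pi * i / n_theta
        px, py = boundary_point(*pupil, theta)   # point on P at theta
        lx, ly = boundary_point(*limbus, theta)  # point on L at theta
        row = []
        for j in range(n_r):
            r = j / (n_r - 1)  # pupil-to-limbic radial distance in [0, 1]
            row.append(((1 - r) * px + r * lx,
                        (1 - r) * py + r * ly))
        grid.append(row)
    return grid
```

Sampling the image at these coordinates yields the unrolled iris texture; non-circular (elliptic or free-form) boundary models replace `boundary_point` with the corresponding parameterisation.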

