Abstract
Current segmentation methods struggle with multi-source heterogeneous iris segmentation because differences in acquisition devices and acquisition environments produce images of greatly varying quality across iris datasets; as a result, different segmentation algorithms are generally applied to distinct datasets. Moreover, deep-learning-based iris segmentation models occupy considerable storage space and are slow to run. We therefore propose PFSegIris, a lightweight, precise, and fast segmentation network aimed at multi-source heterogeneous iris images. First, purpose-designed iris feature extraction modules fully extract heterogeneous iris feature information while reducing the number of parameters, the computation, and the loss of information. Then, an efficient parallel attention mechanism is introduced exactly once, between the encoder and the decoder, to capture semantic information, suppress noise interference, and enhance the discriminability of iris region pixels. Finally, a skip connection from low-level features preserves finer detail. Experiments on four near-infrared datasets and three visible-light datasets show that the segmentation precision is better than that of existing algorithms, while the parameter count and storage footprint are only 1.86 M and 0.007 GB, respectively, and the average prediction time is under 0.10 s. The proposed algorithm segments multi-source heterogeneous iris images more precisely and more quickly than existing algorithms.
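For orientation, the following is a minimal sketch of the pipeline the abstract describes, assuming a PyTorch implementation; the encoder stages, channel widths, and the internals of the `ParallelAttention` block are placeholders for illustration, not the paper's actual modules.

```python
# Hedged sketch of the PFSegIris layout described in the abstract:
# encoder -> one parallel attention stage -> decoder with a low-level skip.
# All module designs and widths below are assumptions, not the paper's.
import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    """Hypothetical parallel (channel + spatial) attention block.
    The abstract only states that one efficient parallel attention
    mechanism sits between the encoder and the decoder."""
    def __init__(self, channels):
        super().__init__()
        # Channel branch: global pooling -> 1x1 conv -> sigmoid gate
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: collapse to a 1-channel map -> sigmoid gate
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Apply both gates in parallel and fuse by summation
        return x * self.channel_gate(x) + x * self.spatial_gate(x)

class PFSegIrisSketch(nn.Module):
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        # Encoder: stand-ins for the iris feature extraction modules
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, stride=2, padding=1),
                                  nn.BatchNorm2d(base), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 4, 3, stride=2, padding=1),
                                  nn.BatchNorm2d(base * 4), nn.ReLU(inplace=True))
        # Attention applied exactly once, between encoder and decoder
        self.attn = ParallelAttention(base * 4)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # Decoder fuses upsampled deep features with the low-level skip
        self.dec = nn.Sequential(nn.Conv2d(base * 4 + base, base, 3, padding=1),
                                 nn.BatchNorm2d(base), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(base, 1, kernel_size=1)  # binary iris-mask logits

    def forward(self, x):
        low = self.enc1(x)             # low-level features kept for the skip
        deep = self.enc2(low)
        deep = self.attn(deep)         # single parallel attention stage
        deep = self.up(deep)
        fused = torch.cat([deep, low], dim=1)  # skip connection from low level
        return self.up(self.head(self.dec(fused)))

model = PFSegIrisSketch()
mask_logits = model(torch.randn(1, 3, 128, 128))  # -> shape (1, 1, 128, 128)
```

The sketch only mirrors the stated topology (single attention stage, one low-level skip, full-resolution mask output); reproducing the reported 1.86 M parameter count would require the paper's actual module designs.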
Highlights
Iris segmentation [5] is the accurate location of the iris region in the whole image. It plays a decisive role in subsequent iris feature expression and in the recognition rate, and it is an important step in the entire iris recognition process
Current deep-learning-based iris segmentation models incur a high cost in terms of a large parameter search space and long segmentation times, place demands on hardware, and perform poorly on multi-source heterogeneous iris segmentation. Targeting these problems and motivated by the above observations, we propose PFSegIris, a precise and fast segmentation network for multi-source heterogeneous iris images that accurately segments iris regions of different sizes; weakens the influence of different spectra and of eyelid and eyelash occlusion noise; and enhances the discriminative ability of iris region pixels, thereby generalizing better to iris images collected by different devices
Unlike traditional methods and other deep-learning-based iris segmentation algorithms, PFSegIris, a more precise segmentation algorithm, was designed to segment multi-source heterogeneous iris images without any preprocessing or postprocessing
Summary
Current deep-learning-based iris segmentation models incur a high cost in terms of a large parameter search space and long segmentation times, place demands on hardware, and perform poorly on multi-source heterogeneous iris segmentation. Targeting these problems and motivated by the above observations, we propose PFSegIris, a precise and fast segmentation network for multi-source heterogeneous iris images that accurately segments iris regions of different sizes; weakens the influence of different spectra and of eyelid and eyelash occlusion noise; and enhances the discriminative ability of iris region pixels, thereby generalizing better to iris images collected by different devices. Our main contributions can be summarized as follows: unlike traditional methods and other deep-learning-based iris segmentation algorithms, PFSegIris, a more precise segmentation algorithm, was designed to segment multi-source heterogeneous iris images without any preprocessing or postprocessing.
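As a quick consistency check on the reported numbers, the 0.007 GB storage figure follows directly from the 1.86 M parameter count if the weights are stored as 32-bit floats (an assumption on our part; the paper may use a different precision):

```python
# Back-of-envelope check, assuming float32 (4-byte) weights.
params = 1.86e6
size_gb = params * 4 / 1e9
print(f"{size_gb:.4f} GB")  # -> 0.0074 GB, consistent with the ~0.007 GB reported
```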