Abstract

Iris recognition has matured to the point where a person can be recognized at a distance. Iris segmentation plays a vital role in maintaining the accuracy of iris-based recognition systems by limiting the errors introduced at this stage. However, its performance degrades in non-ideal situations caused by environmental light noise and user non-cooperation. Existing local feature-based segmentation methods are unable to find the true iris boundary under these non-ideal conditions, and the error created at the segmentation stage propagates to all subsequent stages, reducing accuracy and reliability. In addition, the true iris boundary should be segmented without the extra cost of denoising as a preprocessing step. To overcome these challenges in iris segmentation, a deep learning-based fully residual encoder–decoder network (FRED-Net) is proposed to determine the true iris region, allowing high-frequency information to flow from preceding layers via residual skip connections. The four main contributions of this study are as follows. First, FRED-Net is an end-to-end semantic segmentation network that requires neither conventional image processing schemes nor any preprocessing overhead; it is a standalone network in which eyelid, eyelash, and glint detection are not required to obtain the true iris boundary. Second, the proposed FRED-Net is the final result of a step-by-step development in which each step produced a new, complete variant network for semantic segmentation, described in detail. Third, FRED-Net uses residual shortcut connections between convolutional layers in both the encoder and the decoder, which enables high-frequency components to flow through the network and achieve higher accuracy with fewer layers. Fourth, the performance of the proposed FRED-Net is tested on five iris datasets captured under visible and near-infrared (NIR) light and on two general road scene segmentation datasets. To enable fair comparisons with other studies, our trained FRED-Net models and algorithms are made publicly available on our website (Dongguk FRED-Net Model with Algorithm, accessed on 16 May 2018). The experiments include two visible-light datasets, the Noisy Iris Challenge Evaluation – Part II (NICE-II) subset of the UBIRIS.v2 database and the Mobile Iris Challenge Evaluation (MICHE-I) dataset, and three NIR datasets: Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 Interval, CASIA v4.0 Distance, and IIT Delhi v1.0. Moreover, to evaluate the proposed network on general segmentation, experiments on two well-known road scene segmentation datasets, the Cambridge-driving Labeled Video Database (CamVid) and the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) dataset, are included. The experimental results demonstrated the superior performance of the proposed FRED-Net on the above-mentioned seven iris and general road scene segmentation datasets.
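
Because the abstract does not give the exact layer configuration of FRED-Net, the following is only a minimal PyTorch-style sketch of the residual encoder–decoder idea described above: residual shortcuts around stacked convolutions in both the encoder and the decoder, followed by per-pixel classification. The class names, channel counts, and depths are illustrative assumptions, not the authors' published FRED-Net specification.

```python
# Minimal, hypothetical sketch of a residual encoder-decoder for semantic
# segmentation. Layer sizes and names are assumptions for illustration only.
import torch
import torch.nn as nn


class ResidualConvBlock(nn.Module):
    """Two 3x3 convolutions with a residual shortcut, so high-frequency
    information can bypass the stacked convolutions."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection when the channel count changes, identity otherwise.
        self.shortcut = (nn.Conv2d(in_ch, out_ch, kernel_size=1)
                         if in_ch != out_ch else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))


class TinyResidualEncoderDecoder(nn.Module):
    """Encoder downsamples, decoder upsamples; both reuse residual blocks,
    and a final 1x1 convolution produces per-pixel class scores."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.enc1 = ResidualConvBlock(3, 64)
        self.enc2 = ResidualConvBlock(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec1 = ResidualConvBlock(128, 64)
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        x = self.pool(self.enc1(x))          # encode and downsample
        x = self.pool(self.enc2(x))          # deeper features
        x = self.up(self.dec1(self.up(x)))   # decode and upsample back
        return self.classifier(x)            # per-pixel logits (e.g., iris vs. non-iris)


if __name__ == "__main__":
    logits = TinyResidualEncoderDecoder(num_classes=2)(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 2, 256, 256])
```

The key design choice mirrored here is that the shortcut paths in both the encoder and decoder let fine spatial detail reach the output without passing through every convolution, which is the property the abstract credits for accurate boundaries with comparatively few layers.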
