Abstract

The demand for running iris segmentation models on mobile devices has been growing rapidly. Most current segmentation networks have an enormous number of parameters and are therefore unsuitable for mobile devices, while other small-memory-footprint models follow the spirit of classification networks and ignore the inherent characteristics of segmentation. To address this challenge, we propose a lightweight segmentation network (LiSeNet) for iris segmentation of noisy images. Unlike previous studies that focus only on improving the accuracy of segmentation masks, LiSeNet simultaneously obtains segmentation masks and parameterized pupillary and limbic boundaries of the iris, further enabling CNN-based iris segmentation to be applied in any regular iris recognition system. We first propose a multiscale concatenate (MSC) block, which connects convolution kernels of multiple sizes in a dense manner, gradually reduces the dimension of the feature maps, and aggregates them for image representation. Based on the MSC block, we develop a two-stage refinement encoder that aggregates discriminative features through subnetwork feature reuse and substage feature reassessment, thus obtaining a sufficient receptive field and enhancing the model's learning ability. To exploit object contextual information more efficiently, we further devise a grouped spatial attention module in the decoder that emphasizes important features and suppresses irrelevant noise through a gating mechanism. Extensive experiments on three challenging iris datasets show that LiSeNet, without any complicated postprocessing, achieves competitive or state-of-the-art performance with only 2.2M parameters, 14× smaller than the previous best method. Code will be publicly available.
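The MSC block described above can be sketched roughly as follows in PyTorch. This is a minimal illustration of the stated idea only (densely connected multi-size convolutions whose outputs are aggregated and reduced); the channel counts, kernel sizes, and layer names are assumptions, not the authors' exact design.

```python
# Hypothetical sketch of a multiscale concatenate (MSC) block.
# Kernel sizes, growth rate, and the 1x1 reduction are illustrative
# assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn


class MSCBlock(nn.Module):
    """Densely connects convolutions of several kernel sizes and
    aggregates their feature maps for image representation."""

    def __init__(self, in_ch, growth=16, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList()
        ch = in_ch
        for k in kernel_sizes:
            # Each branch sees the input plus all previous branch outputs
            # (the "dense manner" of connection).
            self.branches.append(
                nn.Sequential(
                    nn.Conv2d(ch, growth, kernel_size=k, padding=k // 2),
                    nn.BatchNorm2d(growth),
                    nn.ReLU(inplace=True),
                )
            )
            ch += growth
        # A 1x1 convolution reduces the concatenated feature dimension.
        self.reduce = nn.Conv2d(ch, in_ch, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        return self.reduce(torch.cat(feats, dim=1))
```

Under these assumptions, an input tensor of shape `(N, C, H, W)` passes through the block unchanged in shape, so the block can be stacked inside an encoder.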
