Iris segmentation is a critical step in iris recognition systems. Because the quality of iris images captured by different camera sensors varies greatly, most existing iris segmentation methods are designed for a particular acquisition device. Moreover, many iris segmentation methods based on convolutional neural networks (CNNs) incur high computational and hardware (storage) costs, making them unsuitable for deployment on low-performance devices. To address these problems, this paper proposes an accurate and efficient heterogeneous iris segmentation network. First, the authors design an efficient feature extraction network that combines depth-wise separable convolution with traditional convolution, greatly reducing model parameters and computational cost while maintaining segmentation accuracy. Then, a Multi-scale Context Information Extraction Module (MCIEM) is proposed to extract multi-scale spatial information at a finer granularity and enhance the discriminability of the iris region. Finally, a Multi-layer Feature Information Fusion Module (MFIFM) is proposed to reduce information loss during downsampling. Experimental results on multi-source heterogeneous iris databases show that the proposed network not only achieves state-of-the-art performance but is also more efficient in terms of parameters, computational load, and storage.
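The parameter savings from replacing standard convolutions with depth-wise separable convolutions can be illustrated with a back-of-the-envelope count; the layer sizes below are hypothetical, not taken from the paper's architecture:

```python
# Parameter-count comparison: standard conv vs. depth-wise separable conv.
# Illustrative sketch only; the layer sizes are hypothetical assumptions,
# not the paper's actual configuration.

def standard_conv_params(c_in, c_out, k):
    # A standard k x k convolution mixes space and channels jointly.
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    # Depth-wise step: one k x k filter per input channel,
    # followed by a 1 x 1 point-wise convolution to mix channels.
    return k * k * c_in + c_in * c_out

c_in, c_out, k = 128, 128, 3  # hypothetical layer sizes
std = standard_conv_params(c_in, c_out, k)        # 147456
sep = depthwise_separable_params(c_in, c_out, k)  # 17536
print(std, sep, round(std / sep, 1))              # ~8.4x fewer parameters
```

For a 3 x 3 layer with 128 input and output channels, the separable form needs roughly an eighth of the parameters, which is the kind of reduction that makes deployment on low-performance devices feasible.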