Abstract
Iris segmentation is a critical step in iris recognition systems. Because the quality of iris images captured by different camera sensors varies greatly, most existing iris segmentation methods are designed for a particular acquisition device. Moreover, many iris segmentation methods based on convolutional neural networks (CNNs) incur high computational and hardware (storage) costs, making them unsuitable for deployment on low-performance devices. To address these problems, an accurate and efficient heterogeneous iris segmentation network is proposed in this paper. First, the authors design an efficient feature extraction network that combines depth-wise separable convolution with traditional convolution to greatly reduce model parameters and computational cost while maintaining segmentation accuracy. Then, a Multi-scale Context Information Extraction Module (MCIEM) is proposed to extract multi-scale spatial information at a finer granularity and enhance the discriminability of the iris region. Finally, a Multi-layer Feature Information Fusion Module (MFIFM) is proposed to reduce the loss of information during downsampling. Experimental results on a multi-source heterogeneous iris database show that the proposed network not only achieves state-of-the-art performance but is also more efficient in terms of parameters, computational load, and storage space.
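To illustrate the parameter-reduction argument behind combining depth-wise separable convolution with traditional convolution, the following minimal PyTorch sketch compares the parameter count of a standard 3x3 convolution with a depth-wise separable equivalent. This is a general illustration of the technique, not the authors' network; the channel sizes are assumptions chosen for the example.

```python
# Minimal sketch (not the authors' code): parameter counts of a standard
# 3x3 convolution vs. a depth-wise separable equivalent.
import torch
import torch.nn as nn


def standard_conv(in_ch: int, out_ch: int) -> nn.Module:
    """Plain 3x3 convolution."""
    return nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)


def depthwise_separable_conv(in_ch: int, out_ch: int) -> nn.Module:
    """Depth-wise 3x3 conv (one filter per input channel) followed by a 1x1 point-wise conv."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch, bias=False),
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
    )


def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())


if __name__ == "__main__":
    in_ch, out_ch = 64, 128  # illustrative channel sizes, not taken from the paper
    std = standard_conv(in_ch, out_ch)
    dws = depthwise_separable_conv(in_ch, out_ch)

    x = torch.randn(1, in_ch, 64, 64)
    assert std(x).shape == dws(x).shape  # same output shape

    print(f"standard conv params:  {count_params(std)}")   # 64*128*3*3 = 73,728
    print(f"separable conv params: {count_params(dws)}")   # 64*3*3 + 64*128 = 8,768
```

For these example channel sizes the separable block uses roughly 8x fewer parameters, which is the kind of saving the abstract refers to when it mentions reducing model parameters and computational cost while maintaining accuracy.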