Abstract

A DCNN pre-trained on a large-scale image database can serve as a universal feature representation for image classification and has achieved significant progress in several image recognition tasks. Compared with other image recognition tasks, however, directly using a single convolutional feature as the representation for vein recognition does not achieve impressive results, because vein information is sparsely distributed. Therefore, to obtain a more representative and discriminative convolutional feature for vein recognition, a novel multi-layer convolutional feature concatenation with a semantic feature selector is proposed in this paper. In a pre-trained DCNN, different convolutional layers encode different levels of feature information: high-level convolutional features carry more semantic information about the veins, while low-level convolutional features carry more detail information. However, low-level convolutional features also contain background information. To remove this background information from the low-level convolutional features, a novel semantic feature selector is presented. First, the proposed local max-pooling of preserving spatial position (LMP-PSP) is applied to the activation map obtained by adding up all feature maps of the high-level convolutional layer, generating a semantic weighting map that reflects the key vein information of the high-level convolutional features. Then, the semantic weighting map is used as a feature selector to discard the background information of the low-level convolutional features while preserving their detail information. Finally, the selected low-level convolutional features are concatenated with the high-level convolutional features based on the proposed semantic feature selector. A series of rigorous experiments on two lab-made vein databases, the CUMT-Hand-Dorsa Vein database and the CUMT-Palm Vein database, is conducted to verify the effectiveness and feasibility of the proposed model. In addition, experiments on the PUT Palm Vein database and a subset of the PolyU database illustrate its generalization ability and robustness.
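
The weighting-map construction described above can be illustrated with a short sketch. The snippet below is a minimal PyTorch illustration under stated assumptions, not the paper's implementation: interpreting LMP-PSP as a max-pool/unpool pair that keeps each window's maximum at its original spatial position, the window size, and the min-max normalization are all assumptions.

import torch
import torch.nn.functional as F

def semantic_weighting_map(high_feats, window=2):
    # high_feats: (C, H, W) feature maps from a high-level convolutional layer.
    # Activation map: add up all feature maps of the high-level layer.
    activation = high_feats.sum(dim=0, keepdim=True).unsqueeze(0)   # (1, 1, H, W)

    # Assumed reading of LMP-PSP: take the local maximum in each window and
    # place it back at its original spatial position (other positions become 0),
    # so the pooled map keeps the input resolution.
    pooled, idx = F.max_pool2d(activation, kernel_size=window,
                               stride=window, return_indices=True)
    preserved = F.max_unpool2d(pooled, idx, kernel_size=window, stride=window,
                               output_size=activation.shape[-2:])

    # Normalize to [0, 1] so the map can later serve as a soft feature selector
    # (the normalization is an assumption, not specified in the abstract).
    w = preserved.squeeze()
    return (w - w.min()) / (w.max() - w.min() + 1e-8)               # (H, W)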

Highlights

  • With the rapid development of society and scientific technology, the protection of personal information security plays an increasingly important role in people's lives

  • The semantic weighting map, which represents the key vein information of the high-level convolutional features, is produced by applying the proposed local max-pooling of preserving spatial position (LMP-PSP) to the activation map generated by adding up all feature maps of the high-level convolutional layer

  • We adopt the semantic weighting map as a feature selector to select the useful detail information in the low-level convolutional features (see the sketch after this list)
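
As referenced in the last highlight, the following sketch shows how such a weighting map could act as a feature selector on the low-level features before concatenation with the high-level features. It is a hypothetical PyTorch sketch: the hard thresholding rule, the bilinear upsampling of the map, and the global-average-pooled concatenation are assumptions rather than the paper's exact procedure.

import torch
import torch.nn.functional as F

def select_and_concatenate(low_feats, high_feats, weight_map, threshold=0.5):
    # low_feats:  (C_l, H_l, W_l) low-level convolutional features.
    # high_feats: (C_h, H_h, W_h) high-level convolutional features.
    # weight_map: (H_h, W_h) semantic weighting map from the high-level layer.

    # Resize the weighting map to the low-level spatial resolution
    # (bilinear upsampling is an assumption).
    wm = F.interpolate(weight_map[None, None], size=low_feats.shape[-2:],
                       mode='bilinear', align_corners=False)[0, 0]

    # Feature selection: suppress background positions in the low-level maps
    # and keep the detail responses at vein positions (hard threshold assumed).
    mask = (wm >= threshold).to(low_feats.dtype)
    selected_low = low_feats * mask                                 # (C_l, H_l, W_l)

    # Concatenate the selected low-level features with the high-level features
    # after global average pooling (the pooling step is an assumption).
    low_vec = F.adaptive_avg_pool2d(selected_low[None], 1).flatten()
    high_vec = F.adaptive_avg_pool2d(high_feats[None], 1).flatten()
    return torch.cat([low_vec, high_vec])                           # (C_l + C_h,)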

Summary

INTRODUCTION

With the rapid development of society and scientific technology, the protection of personal information security plays an increasingly important role in people's lives. Liu et al. [9] present a cross-convolutional-layer pooling based on the convolutional activations of a pre-trained DCNN to obtain a more discriminative deep representation for image recognition. In this paper, a novel multi-layer convolutional feature concatenation with a semantic feature selector is proposed to obtain a more discriminative and richer convolutional feature for hand-dorsa vein recognition; the semantic feature selector connects the low-level and high-level convolutional features so that the resulting deep convolutional features are more discriminative and richer. The outstanding recognition results of a series of rigorous comparison experiments on four databases, including two lab-made databases and two public databases, demonstrate the effectiveness and robustness of the proposed method for vein recognition.
