Abstract
Palm print direction patterns have been used extensively and successfully in palm print recognition. However, most existing direction-based methods rely on predefined filters to obtain authentic line responses from the palm print image, which requires extensive prior knowledge and typically overlooks crucial direction information. In addition, line responses corrupted by noise degrade recognition accuracy. A further challenge for improving recognition performance is how to extract discriminative features that make palm prints easier to distinguish. To address these issues, we propose in this work to learn comprehensive and discriminative direction patterns. We first extract complete and salient local direction patterns from the palm print image, comprising a salient convolution difference feature (SCDF) and a complete local direction feature (CLDF). Two learning models are then proposed: one to learn discriminative and sparse directions from the CLDF, and one to capture the underlying structure of the SCDFs in the training samples. Finally, the projected CLDF and the projected SCDF are concatenated to form the complete and discriminative direction feature for palm print recognition. Experimental results on seven palm print databases and three noisy datasets amply demonstrate the effectiveness of the proposed method.
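The predefined-filter approach the abstract contrasts against can be illustrated with a minimal direction-coding sketch: convolve the image with a bank of oriented line filters and take, per pixel, the orientation with the strongest line response. This is a generic stand-in (simple zero-mean line filters, winner-take-all coding), not the paper's SCDF/CLDF; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def line_filter(size, angle):
    # Illustrative oriented line filter: ones along a line at `angle`
    # through the center, made zero-mean. A stand-in for the predefined
    # directional filters (e.g. Gabor-like templates) used by
    # direction-based palm print methods; not the paper's exact filters.
    f = np.zeros((size, size))
    c = size // 2
    for t in np.linspace(-c, c, 4 * size):
        x = int(round(c + t * np.cos(angle)))
        y = int(round(c + t * np.sin(angle)))
        if 0 <= x < size and 0 <= y < size:
            f[y, x] = 1.0
    return f - f.mean()

def direction_code(img, n_dirs=12, size=7):
    # Winner-take-all direction coding: for each pixel, the index of the
    # most negative filter response. Palm lines are darker than the
    # surrounding skin, so the minimum response marks the line direction.
    H, W = img.shape
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    responses = np.empty((n_dirs, H, W))
    for d in range(n_dirs):
        f = line_filter(size, d * np.pi / n_dirs)
        # brute-force correlation, written out for clarity
        for i in range(H):
            for j in range(W):
                responses[d, i, j] = np.sum(padded[i:i + size, j:j + size] * f)
    return responses.argmin(axis=0)  # per-pixel dominant direction index
```

On a synthetic image containing a single dark horizontal line, pixels on the line are assigned direction index 0 (the horizontal filter). The fixed, hand-designed filter bank is exactly the kind of prior knowledge the proposed learning models aim to replace.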