Abstract
This paper addresses the problem of training machine learning models for person authentication from face images captured in the long-wave infrared range. Publicly and freely available datasets of facial thermograms published by other researchers in the field are analyzed, and it is concluded that a dedicated dataset for training and testing machine learning models has to be collected independently. To obtain the best results when applying the trained methods in practice, a list of requirements for the collected dataset was developed. Finally, an experiment was conducted in which a machine learning method (logistic regression) was trained on each dataset and tested on a sample of 15 images drawn from each dataset (75 images in total). The experiment showed that the algorithm trained on the collected dataset successfully authenticates a user by their thermogram both on the third-party datasets and on the practical one. The results can be applied in access control and management systems to improve the fault tolerance of person authentication. The collected dataset enables training models for theoretical and practical thermogram-processing tasks, in particular authenticating a person by the pattern of veins and vascular networks on the face under changes in appearance and environmental conditions.
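As an illustration of the experimental setup described above, the sketch below trains a binary logistic regression classifier (genuine user vs. impostor) and reports accuracy on a held-in sample. The real work uses facial thermogram features; here synthetic feature vectors stand in for them, and all names, dimensions, and hyperparameters are assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression (binary labels 0/1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)          # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)  # gradient of mean log-loss w.r.t. w
        grad_b = np.mean(p - y)          # gradient w.r.t. bias
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return (sigmoid(X @ w + b) >= 0.5).astype(int)

# Synthetic stand-in for thermogram feature vectors: two separable clusters
X_pos = rng.normal(1.0, 0.5, size=(40, 8))   # "genuine user" features
X_neg = rng.normal(-1.0, 0.5, size=(40, 8))  # "impostor" features
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(40), np.zeros(40)])

w, b = train_logreg(X, y)
acc = np.mean(predict(X, w, b) == y)
print(f"training accuracy: {acc:.2f}")
```

In the paper's protocol, a model like this would be trained separately on each of the five datasets and then evaluated on the combined 75-image test sample.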
Journal: Vestnik komp'iuternykh i informatsionnykh tekhnologii