Abstract

This work studies an intra-bimodal face-based biometric fusion approach that combines the thermal and visual domains. The distinctive feature of this work is the use of a single camera with two sensors, which returns a single image containing both thermal and visual information at a time, as opposed to state-of-the-art approaches such as multibiometric modalities and hyperspectral imaging. The proposed system represents a practical bimodal approach for real applications. It consists of a verification architecture based on the Scale-Invariant Feature Transform (SIFT) algorithm with a vocabulary tree, providing a scheme that scales efficiently to a large number of features. The image database consists of front-view thermal and visual images captured as a single image, containing facial temperature distributions of 41 different individuals in 2-dimensional format, with 18 images per subject acquired in three sessions on different days. Results showed that visible images give better accuracy than thermal information and that, independently of range, head images provide the most discriminative information. Moreover, fusion approaches reached higher accuracy, up to 99.45% for score fusion and 100% for decision fusion. This demonstrates the independence of the information carried by the visual and thermal images and the robustness of the bimodal interaction.
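
As an illustrative aside, the minimal Python sketch below shows the kind of pipeline the abstract describes: SIFT descriptors extracted separately from the visual and thermal halves of a single dual-sensor frame, matched against an enrolled template, and combined by score-level fusion. The side-by-side image layout, the brute-force matcher used in place of the paper's vocabulary tree, and the fusion weights and threshold are all assumptions for illustration, not the authors' implementation.

```python
import cv2

# Hypothetical fusion weights (not taken from the paper): the visible stream is
# weighted more heavily, consistent with the abstract's finding that visible
# images are more accurate than thermal ones.
W_VISUAL, W_THERMAL = 0.6, 0.4

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)  # brute-force stand-in for the vocabulary tree


def split_modalities(combined):
    """Split the single dual-sensor frame into its visual and thermal halves.
    The side-by-side layout (visual left, thermal right) is an assumption."""
    h, w = combined.shape[:2]
    return combined[:, : w // 2], combined[:, w // 2:]


def match_score(descr_query, descr_template, ratio=0.75):
    """Fraction of query descriptors that survive Lowe's ratio test."""
    if descr_query is None or descr_template is None or len(descr_template) < 2:
        return 0.0
    matches = matcher.knnMatch(descr_query, descr_template, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good) / max(len(descr_query), 1)


def verify(combined_query, combined_template, threshold=0.2):
    """Score-level fusion of the two streams followed by a single decision."""
    scores = []
    for query_img, template_img in zip(
        split_modalities(combined_query), split_modalities(combined_template)
    ):
        _, dq = sift.detectAndCompute(query_img, None)
        _, dt = sift.detectAndCompute(template_img, None)
        scores.append(match_score(dq, dt))
    fused = W_VISUAL * scores[0] + W_THERMAL * scores[1]
    return fused >= threshold, fused
```

Decision-level fusion, which the abstract reports reaching 100%, would instead threshold each modality's score independently and combine the two boolean decisions (for example with an AND or OR rule) rather than fusing the raw scores.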
