Abstract
The advancement of face recognition technology has been pivotal in applications ranging from security systems to personalized user experiences. Significant effort has been devoted to the challenges of multimodality and pose variation, but largely in isolation: some studies address multimodal matching while assuming near-frontal poses, while others handle pose variation within a single modality. Despite this progress, few face recognition algorithms consider both constraints together, and recognizing a face image that differs from the gallery in both modality and pose remains a serious challenge for current systems. This paper proposes an algorithm that combines the strengths of deep learning with decision trees to improve face recognition performance across modalities and poses in both constrained and unconstrained environments. The hybrid approach leverages the representational power of deep learning and the interpretability and simplicity of decision trees. The findings indicate significant improvements over existing methodologies, particularly in challenging conditions where multimodality and pose variation are compounded in the input face images. The proposed algorithm not only addresses the limitations of current face recognition systems but also offers a scalable, efficient solution suitable for real-world applications.
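The abstract does not specify the architecture, so the following is only a minimal sketch of the general pattern it describes: deep features feeding an interpretable decision-tree classifier. The synthetic 128-dimensional "embeddings" stand in for the output of a face-recognition network, and the two Gaussian clusters are hypothetical identities; none of this reflects the paper's actual data or model.

```python
# Sketch (assumed pipeline, not the paper's implementation):
# deep embeddings -> shallow decision tree for the final decision stage.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for CNN face embeddings: two hypothetical identities,
# each a cluster in a 128-dimensional embedding space.
n_per_id, dim = 50, 128
emb_a = rng.normal(loc=0.0, scale=1.0, size=(n_per_id, dim))
emb_b = rng.normal(loc=3.0, scale=1.0, size=(n_per_id, dim))
X = np.vstack([emb_a, emb_b])
y = np.array([0] * n_per_id + [1] * n_per_id)

# Interpretable decision stage: a shallow tree over the embeddings,
# whose split rules can be inspected, unlike an end-to-end network.
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
train_acc = clf.score(X, y)
```

The appeal of this split is that the heavy representation learning stays in the network while the final classifier remains auditable: each prediction can be traced to a small set of threshold tests on embedding dimensions.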