Abstract
In this paper, we propose an adaptive bimodal recognition framework based on face and ear using sparse coding, termed ABSRC, which effectively reduces the adverse effect of a degraded modality. A unified and reliable sparse-coding-based biometric quality measure, relying on the collaborative representation over all classes, is presented for both face and ear. For adaptive feature fusion, a flexible piecewise function is designed to assign feature weights according to their quality scores. ABSRC employs a two-phase sparse coding strategy. In the first phase, face and ear features are coded separately on their respective dictionaries for individual quality assessment. In the second phase, the weighted features are concatenated into a single feature vector, which is then coded and classified in the multimodal feature space. Experiments demonstrate that ABSRC is robust to image degradation and outperforms many state-of-the-art methods. Notably, even when the query sample of one modality is severely degraded by random pixel corruption, illumination variation, etc., ABSRC still achieves performance comparable to unimodal recognition based on the other modality.
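The following is a minimal, illustrative sketch of the two-phase pipeline described above, not the paper's actual implementation. It assumes Lasso-based l1 coding, a sparsity-concentration style quality score as a stand-in for the collaborative-representation quality measure, and a simple placeholder piecewise weight rule; all dictionary sizes, thresholds, and regularization values are hypothetical.

```python
# Sketch of quality-adaptive two-phase sparse coding for bimodal (face + ear)
# recognition. Assumptions: Lasso for l1 coding, a sparsity-concentration
# quality proxy, and a linear piecewise weight rule with made-up thresholds.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_classes, per_class, d_face, d_ear = 5, 8, 120, 80
labels = np.repeat(np.arange(n_classes), per_class)

# Training dictionaries: columns are unit-norm training feature vectors.
D_face = rng.standard_normal((d_face, n_classes * per_class))
D_ear = rng.standard_normal((d_ear, n_classes * per_class))
D_face /= np.linalg.norm(D_face, axis=0)
D_ear /= np.linalg.norm(D_ear, axis=0)

def sparse_code(D, y, alpha=0.01):
    """l1-regularized coding of query y over dictionary D."""
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    model.fit(D, y)
    return model.coef_

def quality(x, labels):
    """Quality score in [0, 1]: fraction of coefficient energy concentrated on
    the single best class (an assumed proxy for the paper's measure)."""
    per_class_energy = np.array(
        [np.abs(x[labels == c]).sum() for c in range(labels.max() + 1)])
    return per_class_energy.max() / (per_class_energy.sum() + 1e-12)

def piecewise_weight(q, lo=0.2, hi=0.6):
    """Assumed piecewise rule: suppress a low-quality modality, keep a
    high-quality one, and interpolate in between."""
    if q <= lo:
        return 0.0
    if q >= hi:
        return 1.0
    return (q - lo) / (hi - lo)

def classify(y_face, y_ear):
    # Phase 1: code each modality on its own dictionary, assess quality.
    w_f = piecewise_weight(quality(sparse_code(D_face, y_face), labels))
    w_e = piecewise_weight(quality(sparse_code(D_ear, y_ear), labels))
    # Phase 2: concatenate the quality-weighted features and code them on the
    # correspondingly weighted multimodal dictionary.
    y_joint = np.concatenate([w_f * y_face, w_e * y_ear])
    D_joint = np.vstack([w_f * D_face, w_e * D_ear])
    x = sparse_code(D_joint, y_joint)
    # Classify by minimum class-wise reconstruction residual.
    residuals = [np.linalg.norm(y_joint - D_joint[:, labels == c] @ x[labels == c])
                 for c in range(n_classes)]
    return int(np.argmin(residuals))

# Example: a noisy query built from class 2's training samples.
query_face = D_face[:, labels == 2].mean(axis=1) + 0.05 * rng.standard_normal(d_face)
query_ear = D_ear[:, labels == 2].mean(axis=1) + 0.05 * rng.standard_normal(d_ear)
print(classify(query_face, query_ear))
```

In this sketch, a modality whose phase-1 code spreads energy across many classes (e.g., a heavily corrupted face image) receives a small or zero weight, so the phase-2 multimodal coding is dominated by the cleaner modality.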