Abstract
Developing an ear recognition model that can overcome all challenges and difficulties has been, and still is, the main objective of researchers for years. One particular problem we highlight here is the loss of color information during the test phase, in other words, feeding grayscale, monochrome, or dark test images to a model that was trained on colored images. In this paper, we propose a framework that combines a conditional Deep Convolutional Generative Adversarial Network (DCGAN) and Convolutional Neural Network (CNN) models. The proposed framework consists of a generative model responsible for colorizing grayscale and dark images, followed by a classification model. The performance of the proposed framework has been evaluated on the constrained AMI and the unconstrained AWE ear datasets. Performance metrics were measured under three experimental scenarios; the obtained results highlight the significant negative impact of the absence of color information and demonstrate the vital role of our framework.
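The abstract describes a two-stage inference pipeline: a generative colorization model followed by a CNN classifier. The sketch below illustrates that flow in PyTorch; the layer sizes, 64x64 input resolution, number of subjects, and the `Colorizer`/`EarClassifier` names are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of the two-stage pipeline described in the abstract:
# a generator colorizes a grayscale ear image, then a CNN classifies it.
# All layer choices and sizes are assumptions for illustration only.
import torch
import torch.nn as nn


class Colorizer(nn.Module):
    """Toy encoder-decoder generator: 1-channel (grayscale) in, 3-channel (RGB) out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class EarClassifier(nn.Module):
    """Toy CNN classifier operating on the colorized RGB image."""
    def __init__(self, num_subjects=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 16 * 16, num_subjects)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


# Inference: grayscale test image -> colorized image -> identity prediction.
gray = torch.rand(1, 1, 64, 64)       # placeholder grayscale ear image
rgb = Colorizer()(gray)               # stage 1: colorization
logits = EarClassifier()(rgb)         # stage 2: classification
print(logits.argmax(dim=1))
```

In the paper's setting the colorizer would be the generator of a conditional DCGAN trained adversarially on colored training images, so that grayscale or dark test inputs are restored to a color distribution the classifier was trained on.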