Abstract

To address the shortage of dynamic human ear data, this study developed the Changchun University dynamic human ear (CCU-DE) database, a small-sample human ear database. The database covers a variety of complex conditions and posture changes in human ear images, such as translation angle, rotation angle, illumination change, occlusion, and interference, bringing research on dynamic human ear recognition closer to complex real-life situations and increasing its applicability. To test the practicability and effectiveness of the CCU-DE small-sample database, we designed a block diagram of a dynamic human ear recognition system based on a deep learning model pre-trained with the transfer (migration) learning method. Simulation studies covering multi-posture changes under different contrasts, translation and rotation motions, and the presence or absence of occlusion were conducted on the CCU-DE small-sample database with different deep learning models, including YOLOv3, YOLOv4, YOLOv5, Faster R-CNN, and SSD. The experimental results show that the CCU-DE database is well suited to dynamic ear recognition and that the different deep learning models tested on it all achieve high test accuracy.
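To make the pre-training and fine-tuning step concrete, the sketch below adapts a COCO-pre-trained Faster R-CNN from torchvision to ear classes and runs a single training step on a synthetic sample standing in for an annotated CCU-DE frame. It is an illustrative sketch only, not the authors' implementation; the number of classes, the box coordinates, and the optimizer settings are assumptions.

```python
# Minimal transfer-learning sketch (illustrative only, not the paper's code).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 11  # assumption: 10 CCU-DE subjects + 1 background class

# Load a detector pre-trained on COCO (the transfer-learning starting point).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the box-predictor head so it classifies ear subjects instead of COCO classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# One synthetic (image, target) pair standing in for an annotated CCU-DE frame.
images = [torch.rand(3, 480, 640)]
targets = [{
    "boxes": torch.tensor([[120.0, 80.0, 260.0, 300.0]]),  # x1, y1, x2, y2
    "labels": torch.tensor([1], dtype=torch.int64),         # hypothetical subject ID
}]

# One fine-tuning step; in training mode the model returns a dict of losses.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice this head-swap-and-fine-tune pattern would be repeated over the full CCU-DE training split for each detector family compared in the paper.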

Highlights

  • With the acceleration of informatization, the fields of information security, social security, national security, and financial transaction security place increasingly high requirements on the accuracy of personal identity verification

  • We set up two hypotheses: (1) dynamic human ear recognition is affected by contrast, posture changes, illumination change, the size of the translation or rotation angle, occlusion, and interference; (2) different deep learning models, such as the YOLO series (YOLOv3 [16], YOLOv4 [17], and YOLOv5 [18]), Faster R-CNN, and SSD, can be used to test the effectiveness of the dynamic human ear database

  • The main contributions of this paper are as follows: (1) we developed a dynamic human ear database named CCU-DE; (2) we designed a dynamic human ear recognition system block diagram based on a deep learning model pre-trained with the transfer (migration) learning method; (3) we used various deep learning models for ear recognition to test the dynamic CCU-DE database


Summary

Introduction

With the acceleration of informatization, the fields of information security, social security, national security, and financial transaction security place increasingly high requirements on the accuracy of personal identity verification. Because ear recognition results are affected by changes in environment and posture, deep-learning-based methods offer greater advantages than traditional approaches. We therefore focused on dynamic human ear recognition based on deep learning and set up two hypotheses: (1) dynamic human ear recognition is affected by contrast, posture changes, illumination change, the size of the translation or rotation angle, occlusion, and interference; (2) different deep learning models, such as the YOLO series (YOLOv3 [16], YOLOv4 [17], and YOLOv5 [18]), Faster R-CNN, and SSD, can be used to test the effectiveness of the dynamic human ear database.
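As a companion to hypothesis (2), the short sketch below shows how one of these off-the-shelf detectors can be loaded and run on a single frame; it uses the community YOLOv5 release through torch.hub, and the image path is a placeholder. A COCO-pre-trained checkpoint is only a starting point: for ear recognition its detection head would still have to be retrained on CCU-DE, as in the experiments summarized here.

```python
# Illustrative sketch: load a pre-trained YOLOv5 detector and run it on one frame.
# This only shows the evaluation plumbing; it is not the authors' test pipeline.
import torch

# Downloads the community YOLOv5s model (COCO weights) on first use.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# "ear_frame.jpg" is a placeholder path for one frame extracted from a CCU-DE video.
results = model("ear_frame.jpg")
results.print()           # textual summary of the detections
boxes = results.xyxy[0]   # per-detection [x1, y1, x2, y2, confidence, class]
```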

Dynamic Human Ear Recognition System Block Diagram
Development of CCU-DE Small Sample Database
Acquisition
Eardata1
Eardata2
Eardata3
Eardata5
Participants
Design
Experimental Setting
Determination of the Initial Value of the Model
The Effect of Epoch Value on the Training Model
The Training Data Experiment Results
Contrast Experiment of Each Angle Posture of Translational Motion
Rotational Motion Comparison Experiment
Comparison Experiment with and without Occlusion
Comparative Experiment of CCU-DE Datasets and Different Deep Learning Models
Conclusions
