Abstract
Iris detection and tracking play a vital role in human–computer interaction and have become an emerging research field over the last two decades. Typical applications such as virtual reality, augmented reality, gaze detection for customer behavior analysis, computer control, and handheld embedded devices require accurate and precise detection of iris landmarks. Significant progress has been made in iris detection and tracking; however, detecting iris landmarks in real time with high accuracy remains a challenging and computationally expensive task, compounded by the lack of a publicly available dataset of annotated iris landmarks. This article presents a benchmark dataset and a robust framework for localizing the key landmark points used to extract the iris with better accuracy. A number of training sessions were conducted for MobileNetV2, ResNet50, VGG16, and VGG19 on the iris landmark dataset, with ImageNet weights used for model initialization. Mean Absolute Error (MAE), model loss, and model size were measured to evaluate and validate the proposed model. The results show that the proposed model outperforms the other methods on the selected parameters: the MAEs of MobileNetV2, ResNet50, VGG16, and VGG19 are 0.60, 0.33, 0.35, and 0.34, respectively; the average reduction in size is 60%; and the average reduction in response time is 75% compared to the other models. We collected eye images and annotated them with the help of the proposed algorithm, and the resulting dataset has been made publicly available for research purposes. The contributions of this research are a model of smaller size that predicts iris landmarks accurately and in real time, along with the provided dataset of iris landmark annotations.
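The abstract describes a transfer-learning setup: an ImageNet-pretrained backbone (e.g., MobileNetV2) fine-tuned to regress iris landmark coordinates, evaluated with MAE. The following is a minimal sketch of such a setup in Keras; the input size, number of landmarks, regression head, and optimizer are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of the transfer-learning setup described in the abstract.
# NUM_LANDMARKS, the input resolution, and the regression head are assumed here.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_LANDMARKS = 5  # assumed number of iris landmark points (x, y pairs)

# Backbone initialized with ImageNet weights, as the abstract describes.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)

# Regression head that outputs flattened (x, y) coordinates for each landmark.
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2 * NUM_LANDMARKS),
])

# MAE serves as both the training loss and the reported evaluation metric.
model.compile(optimizer="adam", loss="mae", metrics=["mae"])
```

The same pattern applies to the other backbones named in the abstract (ResNet50, VGG16, VGG19) by swapping the `tf.keras.applications` constructor.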