Abstract

Feature description is an important step in the image registration workflow. The discriminative power of feature descriptors affects feature matching performance and the overall results of image registration. Deep Neural Network (DNN)-based feature descriptors are an emerging trend in image registration tasks, often performing as well as or better than hand-crafted ones. However, there are no learned local feature descriptors trained specifically for human retinal image registration. In this paper we propose a DNN-based feature descriptor trained on retinal image patches and compare it to well-known hand-crafted feature descriptors. The training dataset of image patches was compiled from nine online datasets of eye fundus images. The learned feature descriptor was evaluated against the other descriptors on the Fundus Image Registration (FIRE) dataset by measuring the number of correctly matched ground truth points (Rank-1 metric) after feature description. We compare the performance of various feature descriptors applied to retinal image feature matching.
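To make the evaluation protocol concrete, the sketch below shows one plausible way to compute a Rank-1 score for a single image pair. It is an illustration only: the function name `rank1_score`, the Euclidean matching distance, and the assumption that descriptors have already been extracted at the annotated ground-truth control points are our assumptions, not details taken from the paper.

```python
import numpy as np


def rank1_score(desc_fixed: np.ndarray, desc_moving: np.ndarray) -> float:
    """Fraction of ground-truth points whose nearest neighbour in
    descriptor space is the correct correspondence.

    desc_fixed, desc_moving: (N, D) arrays of descriptors extracted at
    the N annotated control points of an image pair; row i of each
    array is assumed to describe the same ground-truth point.
    """
    # Pairwise Euclidean distances between all descriptor pairs.
    dists = np.linalg.norm(
        desc_fixed[:, None, :] - desc_moving[None, :, :], axis=-1
    )
    # A ground-truth point counts as correctly matched (Rank-1) when its
    # closest descriptor in the other image is its true counterpart.
    nearest = np.argmin(dists, axis=1)
    return float(np.mean(nearest == np.arange(desc_fixed.shape[0])))


if __name__ == "__main__":
    # Illustrative usage with synthetic 128-dimensional descriptors.
    rng = np.random.default_rng(0)
    d_fixed = rng.normal(size=(20, 128)).astype(np.float32)
    d_moving = d_fixed + 0.05 * rng.normal(size=(20, 128)).astype(np.float32)
    print(f"Rank-1: {rank1_score(d_fixed, d_moving):.2f}")
```

In this formulation the score depends only on the descriptors, so hand-crafted and learned descriptors can be compared on identical keypoints.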
