Abstract
Background: A prerequisite for many eye tracking and video-oculography (VOG) methods is an accurate localization of the pupil. Several existing techniques face challenges in images with artifacts and under naturalistic low-light conditions, e.g. with highly dilated pupils.
New method: For the first time, we propose to use a fully convolutional neural network (FCNN) for segmentation of the whole pupil area, trained on 3946 VOG images hand-annotated at our institute. We integrate the FCNN into DeepVOG, along with an established method for gaze estimation from elliptical pupil contours, which we improve upon by considering our FCNN's segmentation confidence measure.
Results: The FCNN output simultaneously enables us to perform pupil center localization, elliptical contour estimation and blink detection, all with a single network and with an assigned confidence value, at framerates above 130 Hz on commercial workstations with GPU acceleration. Pupil center coordinates can be estimated with a median accuracy of around 1.0 pixel, and gaze estimation is accurate to within 0.5 degrees. The FCNN is able to robustly segment the pupil in a wide array of datasets that were not used for training.
Comparison with existing methods: We validate our method against gold-standard eye images that were artificially rendered, as well as hand-annotated VOG data from a gold-standard clinical system (EyeSeeCam) at our institute.
Conclusions: Our proposed FCNN-based pupil segmentation framework is accurate, robust and generalizes well to new VOG datasets. We provide our code and pre-trained FCNN model open-source and for free under www.github.com/pydsgz/DeepVOG.
Highlights
Many disciplines in clinical neurology and neuroscience benefit from the analysis of eye motion and gaze direction, both of which rely on accurate pupil detection and localization as a prerequisite step.
New method: For the first time, we propose to use a fully convolutional neural network (FCNN) for segmentation of the whole pupil area, trained on 3946 VOG images hand-annotated at our institute.
Though trained on data from our institute, we demonstrate that the FCNN generalizes well to pupil segmentation in multiple datasets from other camera hardware and pupil tracking setups.
Summary
Many disciplines in clinical neurology and neuroscience benefit from the analysis of eye motion and gaze direction, both of which rely on accurate pupil detection and localization as a prerequisite step. Pupil detection and tracking techniques are thus a fundamental building block for eye movement analysis, enabling advances in neuroscientific research, clinical assessment and real-life applications. A prerequisite for many eye tracking and video-oculography (VOG) methods is an accurate localization of the pupil. New method: For the first time, we propose to use a fully convolutional neural network (FCNN) for segmentation of the whole pupil area, trained on 3946 VOG images hand-annotated at our institute. We integrate the FCNN into DeepVOG, along with an established method for gaze estimation from elliptical pupil contours, which we improve upon by considering our FCNN's segmentation confidence measure. We provide our code and pre-trained FCNN model open-source and for free under www.github.com/pydsgz/DeepVOG.
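The summary describes how a single segmentation output can yield the pupil center, an elliptical contour estimate, a blink flag and a confidence value. The sketch below illustrates this idea in plain NumPy, deriving all four quantities from a segmentation probability map via thresholding and second-order image moments. It is a minimal illustration, not DeepVOG's actual post-processing; the function name, thresholds and the moments-based ellipse fit are our own assumptions.

```python
import numpy as np

def analyze_pupil_map(prob_map, thresh=0.5, min_area=20):
    """Derive pupil center, ellipse parameters, confidence and a blink flag
    from a segmentation probability map with values in [0, 1].
    Illustrative sketch only; thresholds are hypothetical."""
    mask = prob_map >= thresh
    if mask.sum() < min_area:
        # Too few pupil pixels: treat the frame as a blink.
        return {"blink": True}
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()            # centroid = pupil center estimate
    # Ellipse axes and orientation from second-order moments of the mask:
    # eigenvectors of the pixel covariance give the principal axes.
    cov = np.cov(np.stack([xs, ys]))
    evals, evecs = np.linalg.eigh(cov)       # ascending eigenvalues
    minor, major = 2.0 * np.sqrt(evals)      # semi-axes ~ 2 * sqrt(variance)
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
    # Mean predicted probability inside the mask as a simple confidence proxy.
    confidence = float(prob_map[mask].mean())
    return {"blink": False, "center": (cx, cy),
            "axes": (major, minor), "angle": angle,
            "confidence": confidence}
```

For a synthetic circular "pupil" of radius 10 centered at (30, 20), the centroid recovers the center to sub-pixel accuracy and both semi-axes come out near 10, which is the kind of per-frame output a downstream gaze-estimation step can consume.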