We revisit the problem of iris tracking with RGB cameras, aiming to obtain iris contours from captured eye images. We identify the factor that limits the performance of the state-of-the-art method in general non-cooperative environments and has prevented wider adoption of this useful technique in practice. Because the iris boundary can be inherently unclear or occluded, and its pixels occupy only an extremely small fraction of the entire eye image, like stars hidden in fireworks, we argue that boundary pixels should not be treated as a single class for direct end-to-end recognition. Instead, we propose to first learn features from the iris and sclera regions, and then leverage entropy to sketch the thin, sharp iris boundary pixels, from which we trace more precise parameterized iris contours. We also collect a new smartphone dataset of 22K eye images from video clips and annotate a subset of 2K images, so that label propagation can be applied to further enhance system performance. Extensive experiments on both public datasets and our own show that our method outperforms the state of the art. The results also indicate that our method can refine coarsely labeled data to improve iris-contour accuracy and better support downstream applications than the prior method.
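The core idea of using entropy to localize boundary pixels can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's implementation: given per-pixel softmax probabilities over region classes (e.g. iris, sclera, background), per-pixel Shannon entropy peaks where classes compete, i.e. along the thin region boundaries, while confident interior pixels score near zero.

```python
import numpy as np

def boundary_entropy(probs, eps=1e-8):
    """Per-pixel Shannon entropy of class probabilities.

    probs: (H, W, C) array of softmax probabilities over region
    classes. Entropy is high where classes compete, which happens
    along region boundaries. Names and shapes here are illustrative,
    not the paper's API.
    """
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

# Toy example: a pixel split evenly between iris and sclera has
# near-maximal entropy; a confident iris-interior pixel is near zero.
probs = np.array([[[0.50, 0.50, 0.00],      # ambiguous boundary pixel
                   [0.98, 0.01, 0.01]]])    # confident iris pixel
ent = boundary_entropy(probs)
```

Thresholding such an entropy map would leave a thin band of candidate boundary pixels, to which a parameterized contour (e.g. an ellipse) could then be fitted.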