Abstract

Due to the high cost of eye-tracking systems based on pupillary corneal reflections, efforts to develop webcam-based eye-tracking systems have increased in recent years to provide an affordable alternative for disabled people. However, due to camera specifications and placement, changes in ambient light, and changes in the users' position, the gaze point of the eyes has not yet been determined precisely by such systems. Therefore, previous webcam-based human–computer interaction studies could detect only 8 different gaze directions or up to 10 gaze regions. In this study, a novel gaze input system is proposed to make the best use of the limited performance of webcam-based eye tracking and to offer an economical alternative for disabled people. To reduce the impact of head movements, the webcam was mounted on an ordinary glasses frame and positioned in front of the eye. For estimation of the gaze regions, a feature-based method (the Hough transform) was used, exploiting the circular shape of the iris and the contrast between the iris and the sclera. The central coordinates of the iris in the webcam image were given to a k-nearest neighbor classifier. We performed a series of experiments with 20 subjects to determine the performance of the system and to investigate the effect of ambient light on its accuracy. The 23 regions gazed at by the subjects were determined with an average accuracy of 99.54%. When the ambient light level was reduced by half, the accuracy decreased to 94.74%. As a result, the proposed prototype allows more accurate recognition of a larger number of screen regions than previous webcam-based systems, although its performance decreases when the ambient light is reduced by half.
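The pipeline described above (circular Hough transform to locate the iris centre, then k-nearest neighbor classification of that centre into one of the screen regions) can be illustrated with a minimal sketch. This is not the authors' implementation: the file names, Hough parameters, radius range, and the value of k are assumptions chosen for illustration only.

```python
# Sketch of the abstract's pipeline: iris centre via circular Hough transform,
# then k-NN mapping of the centre coordinates to a gaze region.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def iris_center(frame_bgr):
    """Return the (x, y) iris centre found in one webcam frame, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before circle detection
    # Circular Hough transform: relies on the roughly circular iris and its
    # contrast against the sclera. Parameter values and radii are guesses.
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
        param1=100, param2=30, minRadius=15, maxRadius=60,
    )
    if circles is None:
        return None
    x, y, _r = circles[0][0]  # strongest circle candidate
    return float(x), float(y)

# Calibration data: iris centres recorded while the user gazes at each of the
# screen regions, with the corresponding region labels (hypothetical files).
train_centers = np.load("calib_centers.npy")   # shape (n_samples, 2)
train_regions = np.load("calib_regions.npy")   # e.g. region indices 0..22
knn = KNeighborsClassifier(n_neighbors=3)      # k is an assumed value
knn.fit(train_centers, train_regions)

# At run time, each new frame's iris centre is mapped to a gaze region.
frame = cv2.imread("frame.png")                # placeholder for a webcam frame
center = iris_center(frame)
if center is not None:
    region = knn.predict([center])[0]
    print(f"gazed region: {region}")
```

In this sketch, mounting the camera on a glasses frame (as in the proposed prototype) keeps the eye at a roughly fixed position in the image, which is what lets raw iris-centre coordinates serve directly as k-NN features.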
