Eye-Gaze Tracking Technology
- Research Article
3
- 10.12795/pixelbit.88106
- Jan 1, 2021
- Pixel-Bit, Revista de Medios y Educación
Eye-Gaze Tracking Technology (EGTT) is most commonly used as a communication tool for learners with profound and multiple learning difficulties (PMLD). This research investigates the use of EGTT as an assessment tool providing additional evidence to confirm teacher assessment. The paper contributes to understanding how teachers can use EGTT to address the barriers they face when assessing students with PMLD. Data were obtained from a sample of four students with PMLD and physical disabilities at a special needs school. The qualitative methodology ensured triangulation of the data collection, which included analysis of learners’ heat maps, parent questionnaires and observations of teaching via video capture. The eye-tracking data provided information on each learner’s engagement with the learning objectives that could not otherwise have been communicated. The technology provided an independent data source to inform the teacher’s assessment of the learner’s cognitive abilities. Overall, EGTT enabled more accurate teacher assessment of PMLD students’ abilities, giving teachers more confidence in their judgements by providing robust evidence to underpin their professional practice. For schools looking to invest in tools that deliver, this research can guide SEN leaders in deciding whether to invest in EGTT equipment and how to use it as an assessment tool.
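The heat maps analysed in such studies are typically produced by aggregating fixation data into a duration-weighted density grid over the screen. A minimal Python sketch of that idea only (not the study's own tooling), assuming fixations are available as (x, y, duration) tuples in screen coordinates:

```python
import numpy as np

def gaze_heatmap(fixations, screen_w=1920, screen_h=1080, bins=(48, 27), sigma=1.5):
    """Aggregate (x, y, duration_s) fixations into a duration-weighted heat map."""
    xs = [f[0] for f in fixations]
    ys = [f[1] for f in fixations]
    ws = [f[2] for f in fixations]
    # 2-D histogram weighted by fixation duration (rows = y, columns = x)
    grid, _, _ = np.histogram2d(ys, xs, bins=bins[::-1],
                                range=[[0, screen_h], [0, screen_w]], weights=ws)
    # Separable Gaussian blur so isolated fixations form smooth hot spots
    k = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
    k /= k.sum()
    grid = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, grid)
    grid = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, grid)
    return grid / grid.max()  # normalise to [0, 1] for display

# Example: three fixations on a 1920x1080 screen
heat = gaze_heatmap([(400, 300, 1.2), (420, 310, 0.8), (1500, 800, 0.5)])
```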
- Book Chapter
4
- 10.1007/978-3-319-91464-0_36
- Jan 1, 2018
We propose a method using eye-gaze tracking technology and machine learning to analyze the reading section of the Scholastic Aptitude Test (SAT). An eye-gaze tracking device tracks where the reader is looking on the screen and provides the coordinates of the gaze. The collected data allow us to analyze the reading patterns of test takers and discover which features enable test takers to score higher. Using a machine learning approach, we found that the time spent on the passage at the beginning of the test (in minutes), the number of switches between the passage and the questions, and the total time spent on the reading test (in minutes) have the greatest impact in distinguishing higher scores from lower scores.
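The abstract names three gaze-derived features (initial passage time, passage-question switches, total reading time) but not the model. A hedged scikit-learn sketch of this kind of feature-based classification, with toy data and a random forest standing in for whatever model the authors actually used:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [minutes_on_passage_first, n_switches_passage_questions, total_minutes]
# Toy data standing in for gaze-derived features of individual test takers.
X = np.array([
    [3.1, 14, 18.0], [2.8, 12, 17.5], [3.5, 15, 19.2],  # higher scorers
    [0.9, 30, 22.4], [1.2, 27, 23.0], [0.7, 33, 21.8],  # lower scorers
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = high score, 0 = low score

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Feature importances indicate which gaze behaviours separate the two groups
for name, imp in zip(["passage_time", "switches", "total_time"],
                     clf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```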
- Research Article
3
- 10.7507/1001-5515.201605035
- Oct 1, 2017
- Sheng wu yi xue gong cheng xue za zhi = Journal of biomedical engineering = Shengwu yixue gongchengxue zazhi
In current domestic research on laparoscopic training, researchers usually consider the instrument movement path in the hand-eye coordination relationship but ignore the information contained in visual cues, which could guide and control instrument movements. Studies in other areas have shown that trainees can improve their perceptual-motor skills through gaze training. This study was designed to examine the effectiveness of eye gaze tracking technology in laparoscopic training and to analyze the gaze strategies of subjects under different training methods. The Tobii X1 Light Eye Tracker was used to track the gaze position of subjects performing the two-handed transferring task in a box trainer and to obtain parameters related to gaze strategy, including task-completion efficiency as well as visual search, visual processing and observation transfer analysis based on a Markov chain model. The results showed that completion time in the gaze training group was 101.5 s shorter in the last training session than in the first. Compared with the video training group, the gaze strategy of the gaze training group changed significantly: the ratio of fixation to saccade duration increased by 38%, fixation duration on the target area increased, saccade amplitude increased by 0.58°, and the probability of the fixation point transferring to the equipment decreased by 15%. These results demonstrate that eye gaze tracking technology can be used in laparoscopic training and can improve subjects' skills and shorten the learning curve by teaching the gaze strategies of experts.
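The observation transfer analysis described above rests on a first-order Markov chain over areas of interest (AOIs). As a hedged sketch (the AOI labels and the sequence below are illustrative, not data from the study), transition probabilities such as "fixation point transferring to equipment" can be estimated from a labelled fixation sequence:

```python
import numpy as np

AOIS = ["target", "equipment", "elsewhere"]

def transition_matrix(aoi_sequence):
    """Estimate first-order Markov transition probabilities between AOIs."""
    idx = {a: i for i, a in enumerate(AOIS)}
    counts = np.zeros((len(AOIS), len(AOIS)))
    for src, dst in zip(aoi_sequence, aoi_sequence[1:]):
        counts[idx[src], idx[dst]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalise each row to probabilities, leaving unvisited AOIs at zero
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Example fixation sequence from one trial
seq = ["target", "target", "equipment", "target", "elsewhere", "target"]
P = transition_matrix(seq)
print(P[0, 1])  # probability of a fixation on the target transferring to equipment
```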
- Conference Article
6
- 10.1109/icsp48669.2020.9321075
- Dec 6, 2020
Applications that use human gaze have become increasingly popular in the domain of human-computer interfaces, and advances in eye gaze tracking technology over the past few decades have led to the development of promising gaze estimation techniques. In this paper, a low-cost, in-house video-camera-based gaze tracking system was developed, trained and evaluated. Seminal gaze detection methods constrained the application space to indoor conditions and in most cases required intrusive hardware. More modern gaze detection techniques try to eliminate additional hardware to reduce monetary cost as well as undue burden on the user, while maintaining detection accuracy. In this work, image acquisition was achieved using a low-cost USB web camera mounted at a fixed position on the viewing screen or laptop. To determine the point of gaze, the Viola-Jones face detection algorithm is used to extract facial features from the image frame. The gaze is then calculated using image processing techniques that extract gaze features, namely the image position of the pupil. Thousands of images were classified and labeled to form an in-house database. A multi-class Support Vector Machine (SVM) was trained and tested on this data set to distinguish the point of gaze from an input face image. Cross-validation was used to train the model, and confusion matrices, accuracy, precision and recall are used to evaluate the performance of the classification model. The proposed appearance-based technique is also evaluated in detail using two different kernel functions.
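A rough approximation of this pipeline (Viola-Jones detection, a pupil-position feature, and a multi-class SVM over screen regions) can be assembled from OpenCV's bundled Haar cascades and scikit-learn. The feature below, the darkest point of the eye patch, is a crude illustrative stand-in for the paper's gaze features, not its actual method:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# OpenCV ships these standard Viola-Jones cascade files
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def pupil_feature(frame_gray):
    """Return the normalised pupil position within the first detected eye, or None."""
    faces = face_cascade.detectMultiScale(frame_gray, 1.3, 5)
    for (x, y, w, h) in faces:
        roi = frame_gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            eye = roi[ey:ey + eh, ex:ex + ew]
            # Darkest point of the blurred eye patch is a crude pupil estimate
            _, _, min_loc, _ = cv2.minMaxLoc(cv2.GaussianBlur(eye, (9, 9), 0))
            return np.array([min_loc[0] / ew, min_loc[1] / eh])
    return None

# X: pupil features from labelled frames; y: gazed screen region (e.g. 3x3 grid -> 0..8)
# svm = SVC(kernel="rbf").fit(X, y); region = svm.predict([pupil_feature(frame)])
```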
- Research Article
30
- 10.1007/s12193-010-0054-0
- Nov 1, 2009
- Journal on Multimodal User Interfaces
Auditory prominence refers to an acoustic segment that is made salient in its context. Prominence is one of the prosodic functions that have been shown to be strongly correlated with facial movements. In this work, we investigate the effects of facial prominence cues, in terms of gestures, when synthesized on animated talking heads. In the first study, a speech intelligibility experiment is conducted: speech quality is acoustically degraded and the fundamental frequency is removed from the signal, then the speech is presented to 12 subjects through a lip-synchronized talking head carrying head-nod and eyebrow-raising gestures synchronized with the auditory prominence. The experiment shows that presenting prominence as facial gestures significantly increases speech intelligibility compared to when these gestures are randomly added to speech. We also present a follow-up study examining the perception of the talking head's behavior when gestures are added over pitch accents. Using eye-gaze tracking technology and questionnaires with 10 moderately hearing-impaired subjects, the gaze data show that when gestures are coupled with pitch accents, users look at the face in a similar fashion to how they look at a natural face, as opposed to when the face carries no gestures. The questionnaire results also show that these gestures significantly increase the naturalness and understandability of the talking head.
- Book Chapter
12
- 10.1007/978-3-642-18184-9_6
- Jan 1, 2011
In this chapter, we investigate the effects of facial prominence cues, in terms of gestures, when synthesized on animated talking heads. In the first study, a speech intelligibility experiment is conducted in which speech quality is acoustically degraded and the speech is then presented to 12 subjects through a lip-synchronized talking head carrying head-nod and eyebrow-raising gestures. The experiment shows that perceiving visual prominence as gestures synchronized with the auditory prominence significantly increases speech intelligibility compared to when these gestures are randomly added to speech. We also present a study examining the perception of the talking head's behavior when gestures are added at pitch movements. Using eye-gaze tracking technology and questionnaires with 10 moderately hearing-impaired subjects, the gaze data show that when gestures are coupled with pitch movements, users look at the face in a similar fashion to how they look at a natural face, as opposed to when the face carries no gestures. The questionnaire results also show that these gestures significantly increase the naturalness and helpfulness of the talking head.
Keywords: visual prosody, prominence, stress, multimodal, gaze, head-nod, eyebrows, visual synthesis, talking heads
- Book Chapter
1
- 10.1007/978-3-030-90179-0_1
- Jan 1, 2021
This chapter establishes a basic template for the interface design of augmentative and alternative communication (AAC) software that uses eye-gaze tracking technology as input. The main aim is to highlight desirable and undesirable characteristics of the user interface with regard to usability. To do so, we conducted a systematic evaluation of commercially available products and from it drew design criteria which, combined with modern usability requirements, were used to develop a user interface prototype. This prototype was refined through an evolutionary prototyping methodology. Several usability tests were performed to evaluate the design regarding consistency, comprehensibility, clarity, flexibility, efficiency and potential for customization, and feedback from users was integrated into subsequent iterations of the design. The prototyping process also serves as a critical analysis of the drawbacks in standard designs. As a result, we provide a basic template that aggregates the qualities of previous designs, minimizes drawbacks wherever possible, and can be used as a guideline for further development of AAC tools.
- Research Article
- 10.7860/jcdr/2025/78212.21038
- May 1, 2025
- JOURNAL OF CLINICAL AND DIAGNOSTIC RESEARCH
Introduction: Eye gaze tracking is essential to understanding non-verbal communication, human-computer interaction and cognitive responses. Its applications range from healthcare to consumer behaviour analysis and gaming. The evolution of eye gaze tracking technology and its increasing adoption highlight its significance across diverse domains. Aim: To conduct a bibliometric analysis of eye gaze tracking research over the past two decades (2005-2024), exploring publication trends, collaboration patterns, key contributors and emerging research themes. Materials and Methods: This bibliometric review extracted data on 9,773 peer-reviewed articles from the Web of Science Core Collection. VOSviewer and Biblioshiny were used for co-authorship, citation and co-occurrence network analysis. The key conceptual and intellectual trends in eye gaze tracking research were identified, focusing on publication output and global research collaborations. Results: More than 62% of the publications appeared in the period 2018-2024. The United States of America (USA) accounted for the majority of research contributions, followed by England and Germany. Applications in healthcare, marketing and cognitive sciences were evident, with “autism” being a focus of critical importance. Conclusion: Eye gaze tracking has seen rapid growth since 2018, with an increasing focus on Artificial Intelligence (AI)-assisted applications in healthcare, assistive technology and marketing. Emerging trends such as deep-learning-based eye movement prediction and gaze-driven user experiences are shaping future developments. Upcoming research is expected to integrate AI, neuroscience and human-computer interaction to advance diagnostics and gaze-based security solutions.
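The keyword co-occurrence networks that tools such as VOSviewer build can be illustrated by counting keyword pairs across bibliographic records. A toy sketch of that counting step only (the records below are invented placeholders, not data from the review):

```python
from collections import Counter
from itertools import combinations

# Toy records: author keyword lists from bibliographic exports (e.g. Web of Science)
records = [
    ["eye tracking", "autism", "healthcare"],
    ["eye tracking", "deep learning", "gaze estimation"],
    ["eye tracking", "autism", "assistive technology"],
]

cooccurrence = Counter()
for keywords in records:
    # Each unordered keyword pair in a record counts as one co-occurrence
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# The strongest links form the edges of a keyword co-occurrence network
for pair, count in cooccurrence.most_common(3):
    print(pair, count)
```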
- Research Article
752
- 10.1016/j.cviu.2004.07.010
- Nov 11, 2004
- Computer Vision and Image Understanding
Eye gaze tracking techniques for interactive applications
- Conference Article
3
- 10.1109/cisp.2011.6100019
- Oct 1, 2011
Eye-gaze tracking technology provides an unconventional mode of human-computer interaction and has enabled many practical applications and industrial products. The main task in eye-gaze tracking is to model gaze directions and compute gaze coordinates. In computer-vision-based approaches, the key problem in this modeling process is feature description. In this paper, a new descriptor, Local and Scale Integrated Feature (LoSIF), is proposed to extract eye-gaze movement features in a non-intrusive system that allows slight head movement. The descriptor is based on a two-level Haar wavelet transform and combines multi-resolution characteristics with an effective dimension-reduction algorithm to achieve local and scale characterization of eye movement. We use Support Vector Regression to estimate the mapping function between the appearance of the eyes and the corresponding gaze directions. Experimental results show that the accuracy is within an acceptable range of 1 degree.
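A simplified sketch of this kind of pipeline can be built with PyWavelets for the two-level Haar decomposition and scikit-learn's SVR for the appearance-to-gaze mapping. The flattened wavelet coefficients below are a plain stand-in for the full LoSIF descriptor, and the patches and gaze angles are random placeholders:

```python
import numpy as np
import pywt
from sklearn.svm import SVR

def haar_features(eye_patch):
    """Two-level 2-D Haar wavelet transform of a grayscale eye patch,
    flattened into a feature vector (a stand-in for the LoSIF descriptor)."""
    coeffs = pywt.wavedec2(eye_patch, "haar", level=2)
    parts = [coeffs[0].ravel()]                 # coarse approximation
    for (cH, cV, cD) in coeffs[1:]:             # detail bands per level
        parts += [cH.ravel(), cV.ravel(), cD.ravel()]
    return np.concatenate(parts)

# Toy training set: random 32x32 eye patches with known gaze angles (degrees)
rng = np.random.default_rng(0)
X = np.stack([haar_features(rng.random((32, 32))) for _ in range(20)])
y = rng.uniform(-15, 15, size=20)  # horizontal gaze angle per patch

# One SVR per gaze coordinate maps eye appearance to gaze direction
model = SVR(kernel="rbf").fit(X, y)
print(model.predict(X[:1]))
```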
- Research Article
330
- 10.1109/tbme.2007.895750
- Dec 1, 2007
- IEEE Transactions on Biomedical Engineering
Most available remote eye gaze trackers have two characteristics that hinder them from being widely used as computer input devices for human-computer interaction: first, they have to be calibrated for each user individually; second, they have low tolerance for head movement and require users to hold their heads unnaturally still. In this paper, by exploiting the anatomy of the eye, we propose two novel solutions that allow natural head movement and reduce the calibration procedure to a single session per individual. The first technique estimates the 3-D eye gaze directly. The cornea of the eyeball is modeled as a convex mirror, and via the properties of the convex mirror, a simple method is proposed to estimate the 3-D optic axis of the eye. The visual axis, which is the true 3-D gaze direction of the user, is then determined once the angular deviation between the visual axis and the optic axis has been found through a simple calibration procedure. The gaze point on an object in the scene is obtained by intersecting the estimated 3-D gaze direction with the object. Unlike the first technique, the second technique does not estimate the 3-D eye gaze directly; instead, the gaze point on an object is estimated implicitly from a gaze mapping function. In addition, a dynamic computational head compensation model is developed to automatically update the gaze mapping function whenever the head moves, so that eye gaze can be estimated under natural head movement while the calibration procedure is again reduced to one session per individual. The advantage of the proposed techniques over current state-of-the-art eye gaze trackers is that they can estimate the user's eye gaze accurately under natural head movement, without gaze calibration before each use. The proposed methods improve the usability of eye gaze tracking technology, and we believe they represent an important step toward eye trackers being accepted as natural computer input devices.
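The final step of the first technique, intersecting the estimated 3-D gaze direction with an object in the scene, reduces to a ray-plane intersection when the object is a screen. A minimal sketch, assuming the visual axis has already been recovered (optic axis corrected by the per-user kappa angle) and that the screen lies in a known plane; the coordinates are illustrative:

```python
import numpy as np

def gaze_point_on_plane(eye_center, visual_axis, plane_point, plane_normal):
    """Intersect the 3-D gaze ray (eye_center + t * visual_axis) with a plane."""
    d = np.asarray(visual_axis, float)
    d /= np.linalg.norm(d)
    n = np.asarray(plane_normal, float)
    denom = n @ d
    if abs(denom) < 1e-9:
        return None  # gaze is parallel to the plane
    t = n @ (np.asarray(plane_point, float) - eye_center) / denom
    return eye_center + t * d if t > 0 else None  # only intersections in front

# Eye 60 cm in front of a screen lying in the z = 0 plane (millimetres)
eye = np.array([0.0, 0.0, 600.0])
axis = np.array([0.05, -0.02, -1.0])  # visual axis (optic axis + kappa offset)
print(gaze_point_on_plane(eye, axis, np.zeros(3), np.array([0.0, 0.0, 1.0])))
```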
- Research Article
18
- 10.5555/1061935.1649095
- Apr 1, 2005
- Computer Vision and Image Understanding
Eye gaze tracking techniques for interactive applications
- Book Chapter
8
- 10.1007/978-3-319-22701-6_28
- Jan 1, 2015
With eye gaze tracking technology entering the consumer market, there is an increased interest in using it as an input device, similar to the mouse. This holds promise for situations where a typical desk space is not available. While gaze seems natural for pointing, it is inherently inaccurate, which makes the design of fast and accurate methods for clicking targets (“click alternatives”) difficult. We investigate click alternatives that combine gaze with a standard keyboard (“gaze & key click alternatives”) to achieve an experience where the user’s hands can remain on the keyboard all the time. We propose three novel click alternatives (“Letter Assignment”, “Offset Menu” and “Ray Selection”) and present an experiment that compares them with a naive gaze pointing approach (“Gaze & Click”) and the mouse. The experiment uses a randomized, realistic click task in a web browser to collect data about click times and click accuracy, as well as asking users for their preference. Our results indicate that eye gaze tracking is currently too inaccurate for the Gaze & Click approach to work reliably. While Letter Assignment and Offset Menu were usable and a large improvement, they were still significantly slower and less accurate than the mouse.
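The Letter Assignment idea, labelling the clickable targets near the gaze point with keyboard letters so that a key press performs the click, can be sketched abstractly as follows; the coordinates, radius and target names are illustrative, not taken from the paper:

```python
import string

def assign_letters(targets, gaze, radius=150):
    """Label clickable targets within `radius` px of the gaze point with letters.
    `targets` maps target ids to (x, y) centres; returns {letter: target_id}."""
    near = sorted(
        (t for t, (x, y) in targets.items()
         if (x - gaze[0]) ** 2 + (y - gaze[1]) ** 2 <= radius ** 2),
        # Closest targets to the gaze point get the earliest letters
        key=lambda t: (targets[t][0] - gaze[0]) ** 2 + (targets[t][1] - gaze[1]) ** 2,
    )
    return dict(zip(string.ascii_lowercase, near))

links = {"home": (100, 90), "about": (130, 140), "contact": (800, 600)}
labels = assign_letters(links, gaze=(120, 100))
print(labels)  # e.g. {'a': 'home', 'b': 'about'}; pressing 'a' clicks 'home'
```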
- Conference Article
3
- 10.1109/rtcsa.2014.6910542
- Aug 1, 2014
Gaze tracking is the process of measuring the point of gaze or the motion of an eye relative to the head, and it provides a new mode of human-computer interaction. In addition, eye gaze tracking can be applied to help severely disabled people use computers. In this paper, we explored the use of eye gaze tracking technology on a tablet device, designing and implementing an eye tracking system on an iPad. In our system, gaze estimation is based on analyzing the appearance of the eyes as captured by the iPad's built-in camera. Artificial neural networks are employed to estimate the location of the user's gaze from the eye-region image. The results indicate that an accuracy of 82.5% is achievable with the proposed system.
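The appearance-based mapping described here, from an eye-region image to a gaze location via a neural network, can be sketched with a small scikit-learn MLP that classifies gaze into a grid of screen regions (the reported 82.5% accuracy suggests a classification setup; the patch size, grid and data below are assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy stand-in data: flattened 24x36 grayscale eye-region patches,
# each labelled with one of 3x3 = 9 screen regions the user was looking at.
rng = np.random.default_rng(1)
X = rng.random((180, 24 * 36))
y = rng.integers(0, 9, size=180)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=1)
net.fit(X, y)

# At run time: crop the eye region from a camera frame, flatten, and predict
region = net.predict(X[:1])[0]  # index of the gazed screen region (0..8)
print(region)
```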
- Conference Article
152
- 10.1109/cvpr.2005.148
- Jun 20, 2005
Most available remote eye gaze trackers based on the pupil center corneal reflection (PCCR) technique have two characteristics that prevent them from being widely used as an important computer input device for human-computer interaction: first, they must often be calibrated repeatedly for each individual; second, they have low tolerance for head movement and require the user to hold the head uncomfortably still. In this paper, we propose a solution for the classical PCCR technique that simplifies the calibration procedure and allows free head movement. The core of our method is to analytically obtain a head mapping function that compensates for head movement. Specifically, the head mapping function automatically maps an eye movement measurement made under an arbitrary head position to a reference head position, so that gaze can be estimated from the mapped measurement with respect to the reference position. Furthermore, our method reduces the calibration procedure to one time for each individual. The proposed method significantly improves the usability of eye gaze tracking technology, a major step toward eye trackers being accepted as natural computer input devices.
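In PCCR systems, the gaze mapping function referred to here typically takes the pupil-glint vector to screen coordinates, fitted once while the user fixates known calibration targets; the paper's contribution is the analytic head compensation that remaps measurements from an arbitrary head pose back to the calibration pose. A sketch of the calibration fit alone, using a common second-order polynomial form that is not necessarily the paper's:

```python
import numpy as np

def poly_terms(v):
    """Second-order polynomial terms of a pupil-glint vector v = (dx, dy)."""
    dx, dy = v
    return np.array([1.0, dx, dy, dx * dy, dx**2, dy**2])

def fit_gaze_mapping(pg_vectors, screen_points):
    """Least-squares fit of screen (x, y) from pupil-glint vectors,
    done once per user while they fixate known calibration targets."""
    A = np.stack([poly_terms(v) for v in pg_vectors])
    coef, *_ = np.linalg.lstsq(A, np.asarray(screen_points, float), rcond=None)
    return coef  # shape (6, 2): one column of coefficients per screen coordinate

def map_gaze(coef, pg_vector):
    return poly_terms(pg_vector) @ coef

# Nine-point calibration: pupil-glint vectors paired with known target positions
pg = [(-0.2, -0.1), (0.0, -0.1), (0.2, -0.1), (-0.2, 0.0), (0.0, 0.0),
      (0.2, 0.0), (-0.2, 0.1), (0.0, 0.1), (0.2, 0.1)]
targets = [(x, y) for y in (100, 500, 900) for x in (200, 950, 1700)]
coef = fit_gaze_mapping(pg, targets)
print(map_gaze(coef, (0.1, 0.05)))  # estimated on-screen gaze point (pixels)
```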