Gaze estimation is a fundamental task in computer vision that determines the direction in which a person is looking. With advances in Convolutional Neural Networks (CNNs) and the availability of large-scale datasets, appearance-based models have made significant progress. Nonetheless, CNNs are limited in extracting global information from features, which constrains gaze estimation performance. Inspired by the properties of the Fourier transform in signal processing, we propose the Frequency-Spatial Interaction network for Gaze estimation (FSIGaze), which integrates residual modules and Frequency-Spatial Synergistic (FSS) modules. Specifically, the FSS module is a dual-branch structure with a spatial branch and a frequency branch. The frequency branch employs the Fast Fourier Transform to map a latent representation into the frequency domain and applies an adaptive frequency filter to achieve an image-size receptive field. The spatial branch, in contrast, extracts local detailed features. Acknowledging the synergistic benefits of global and local information in gaze estimation, we introduce a Dual-domain Interaction Block (DIB) to enhance the model's capability. Furthermore, we adopt a multi-task learning strategy, incorporating eye region detection as an auxiliary task to refine facial features. Extensive experiments demonstrate that our model surpasses other state-of-the-art gaze estimation models on three three-dimensional (3D) datasets and delivers competitive results on two two-dimensional (2D) datasets.
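The core idea of the frequency branch can be sketched as follows: transforming a feature map to the frequency domain and applying an elementwise filter gives every output location a dependence on every input location, i.e., an image-size receptive field. The function name, filter shape, and single-channel setting below are illustrative assumptions for a minimal NumPy sketch, not the authors' implementation (which would use learned filters inside a network).

```python
import numpy as np

def frequency_branch(x, freq_filter):
    """Global filtering in the frequency domain (illustrative sketch).

    x: (H, W) feature map.
    freq_filter: (H, W//2 + 1) complex weights (in the paper's setting
    these would be learned, i.e., the "adaptive frequency filter").

    Each frequency coefficient mixes information from all spatial
    locations, so one elementwise multiply acts with an image-size
    receptive field.
    """
    X = np.fft.rfft2(x)                          # spatial -> frequency
    X_filtered = X * freq_filter                 # adaptive elementwise filter
    return np.fft.irfft2(X_filtered, s=x.shape)  # frequency -> spatial

# Sanity check: an all-ones filter is the identity mapping.
x = np.random.default_rng(0).standard_normal((8, 8))
identity = np.ones((8, 8 // 2 + 1), dtype=complex)
y = frequency_branch(x, identity)
assert np.allclose(x, y)
```

In a real model the spatial branch (local convolutions) would run in parallel, with the two outputs fused by the interaction block.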