As a nonverbal cue, gaze plays a critical role in communication, expressing emotions and reflecting mental activity, and it has widespread applications in many fields. Recently, appearance-based gaze estimation methods built on convolutional neural networks (CNNs) have rapidly improved the accuracy and robustness of gaze estimation algorithms. However, because CNNs have an insufficient ability to capture global relationships, the accuracy of current gaze estimation methods in unconstrained environments still leaves room for improvement. This paper therefore focuses on improving gaze estimation accuracy, typically measured as the mean angular error. In light of the Transformer's breakthroughs in image classification and object detection, and the need for an efficient network, a Transformer-enhanced CNN is a suitable choice. This paper proposes a novel model for 3D gaze estimation in unconstrained environments, based on the Bottleneck Transformer (BoT) block and a multi-loss method. Our network, BoT2L-Net, incorporates self-attention through the BoT block and uses two identical loss functions to predict the two gaze angles. In addition, back-propagation is driven by combined classification and regression losses to improve the network's accuracy and robustness. Our model was evaluated on two commonly used gaze datasets: on the Gaze360 testing set it achieves mean angular errors of 11.53° and 9.59° for front-180° and front-facing gaze angles, respectively, and on the MPIIGaze testing set it achieves a mean angular error of 3.97°, outperforming CNN-based gaze estimation methods. The strong performance of BoT2L-Net on both publicly available datasets demonstrates the effectiveness of our approach.
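The combined classification-and-regression loss mentioned above can be illustrated with a minimal sketch. The paper does not spell out its exact formulation here, so the following is a hedged, hypothetical NumPy example assuming the common bin-based scheme: each gaze angle is discretized into fixed-width bins, a cross-entropy term scores the predicted bin distribution, and a mean-squared-error term scores the continuous angle recovered as the softmax expectation over bin centers. The function name `combined_gaze_loss`, the bin width, and the weighting factor `alpha` are all illustrative assumptions, not the paper's values.

```python
import numpy as np

def combined_gaze_loss(logits, target_deg, bin_width=4.0,
                       angle_range=(-180.0, 180.0), alpha=1.0):
    """Hypothetical combined loss for ONE gaze angle (e.g. yaw).

    logits     -- raw scores over angle bins, shape (num_bins,)
    target_deg -- ground-truth angle in degrees
    Returns (loss, expected_deg). Assumes bin-based gaze estimation;
    this is a sketch, not the paper's exact formulation.
    """
    lo, hi = angle_range
    # centers of the fixed-width angle bins
    bin_centers = np.arange(lo + bin_width / 2.0, hi, bin_width)
    # numerically stable softmax over bin logits
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    # classification term: cross-entropy against the target's bin
    target_bin = int((target_deg - lo) // bin_width)
    ce = -np.log(probs[target_bin] + 1e-12)
    # regression term: MSE between the expected angle and the target
    expected_deg = float(np.sum(probs * bin_centers))
    mse = (expected_deg - target_deg) ** 2
    return ce + alpha * mse, expected_deg
```

In this scheme the same loss is applied independently to the yaw and pitch heads (the "two identical loss functions" of the abstract); the classification term stabilizes training while the regression term keeps the final prediction continuous.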