Abstract

Helmet-mounted display (HMD) systems allow aircraft pilots to aim at targets using head pose. However, deriving the aiming direction directly from helmet orientation ignores human eye movements, which offer a more flexible and efficient channel for interaction. The helmet's opaque goggles block external cameras from capturing facial or eye images of the pilot, and traditional eye feature extraction methods may fail under conditions such as poor lighting, occlusion, and vibration, which are common on fighter aircraft. This work proposes an eye-gaze-based aiming solution adapted to pilots wearing HMDs, together with a deep-learning-based method for robust eye feature extraction. Prototype experiments demonstrate real-time (60 FPS) target picking and aiming, with target markers on a screen located at an average error of less than 2 degrees. In summary, the proposed method performs eye feature extraction on real-person imagery and estimates the 3D aiming direction for helmet-wearing users, achieving results competitive with similar research.
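For context, the reported accuracy figure is an angular error: the angle between the estimated and true 3D aiming directions. A minimal sketch of that standard computation (assuming unit-normalizable gaze vectors; this is not the authors' evaluation code) might look like:

```python
import numpy as np

def angular_error_deg(pred: np.ndarray, gt: np.ndarray) -> float:
    """Angle in degrees between a predicted and a ground-truth 3D gaze vector."""
    pred = pred / np.linalg.norm(pred)
    gt = gt / np.linalg.norm(gt)
    # Clip to guard against floating-point drift outside arccos's domain.
    cos_theta = np.clip(np.dot(pred, gt), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

# Example: a prediction 1.5 degrees off the true direction.
gt = np.array([0.0, 0.0, 1.0])
pred = np.array([np.sin(np.radians(1.5)), 0.0, np.cos(np.radians(1.5))])
print(angular_error_deg(pred, gt))  # ~1.5
```

Under this metric, the reported sub-2-degree average corresponds to the mean of such per-sample angles over the test targets.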
