Abstract

Advanced fire control technologies that use computer vision-guided target recognition can provide dismounted soldiers equipped with augmented reality displays, such as the Integrated Visual Augmentation System, with enhanced situational awareness. Here we describe a virtual reality framework and environment for the design and evaluation of computer vision algorithms and augmented reality interfaces intended to enhance dismounted soldier situational awareness. To train models, synthetic image datasets of targets in virtual environments can be generated in tandem with neural network learning. To evaluate models in simulated operational environments, a dismounted soldier combat scenario was developed. Trained models process input from a “virtual camera” mounted in-line with a rifle-mounted telescopic sight. Augmented reality overlays are projected over the sight’s optics, modeling the function of current state-of-the-art holographic displays. To assess the impact of these capabilities on situational awareness, performance metrics and physiological monitoring were integrated into the system. To investigate how sensors beyond visible-wavelength optical imaging may be leveraged to enhance this capability, particularly in degraded visual environments, the virtual camera framework was extended with methods for simulating multispectral infrared imaging. This virtual reality framework thus provides a platform for evaluating multispectral computer vision algorithms under simulated operational conditions and for iteratively refining the design of augmented reality displays. Improving the design of these components in virtual reality provides a rapid and cost-effective method for refining specifications and capabilities toward a field-deployable system.
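
To make the described pipeline concrete, the following is a minimal, hypothetical sketch of the virtual-camera-to-overlay loop outlined in the abstract: a simulated frame (with an extra channel standing in for a simulated infrared band) is passed to a placeholder detector, and detections are mapped into the coordinate space of the holographic sight display. All names (VirtualCamera frame, Detection, to_overlay_coords) and the proportional mapping are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the virtual camera -> detector -> AR overlay loop.
# Names and shapes are illustrative assumptions, not the paper's actual code.
from dataclasses import dataclass
import numpy as np


@dataclass
class Detection:
    label: str          # e.g. "target"
    confidence: float   # model confidence score
    bbox: tuple         # (x_min, y_min, x_max, y_max) in camera pixels


def virtual_camera_frame(width=640, height=480, channels=4):
    """Stand-in for a frame from the simulated rifle-sight camera.
    The fourth channel here sketches an added simulated infrared band."""
    return np.random.rand(height, width, channels).astype(np.float32)


def run_detector(frame):
    """Placeholder for the trained neural network; returns mock detections."""
    return [Detection("target", 0.92, (300, 180, 360, 260))]


def to_overlay_coords(det, frame_shape, display_shape=(720, 1280)):
    """Map a detection's bounding box from camera pixels into the display's
    coordinate space (simple proportional scaling, assuming the virtual
    camera and the sight optics share a boresight)."""
    fh, fw = frame_shape[:2]
    dh, dw = display_shape
    x0, y0, x1, y1 = det.bbox
    return (x0 * dw / fw, y0 * dh / fh, x1 * dw / fw, y1 * dh / fh)


if __name__ == "__main__":
    frame = virtual_camera_frame()
    for det in run_detector(frame):
        box = to_overlay_coords(det, frame.shape)
        print(f"Overlay {det.label} ({det.confidence:.2f}) at {box}")
```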
