Abstract

Efforts are underway across the defense and commercial industries to develop cross-reality (XR), multi-user operation centers in which human users can perform their work while aided by intelligent systems. At their core is the objective to accelerate decision-making and improve efficiency and accuracy. However, presenting data to users in an XR, multi-dimensional environment dramatically increases extraneous information density. Intelligent systems offer a potential mechanism for mitigating information overload while ensuring that critical and anomalous data are brought to the attention of human users in an immersive interface. This paper describes such a prototype system, which combines real and synthetic motion sensors that, upon detecting an event, send a captured image for processing by a YOLO cluster. Finally, we describe how a future system could integrate a decision-making component that evaluates the resulting metadata to determine whether to inject the results into an XR environment for presentation to human users.
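The abstract outlines a pipeline: a motion sensor detects an event, a captured image is sent to a YOLO cluster for object detection, and a decision-making component evaluates the resulting metadata to decide whether to surface it in the XR environment. A minimal, runnable sketch of that flow is below; all names (`MotionEvent`, `run_yolo`, `should_inject`, the labels and threshold) are hypothetical illustrations, not part of the authors' system, and the YOLO call is stubbed with canned detections rather than a real inference cluster.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    """One object-detection result from the YOLO stage."""
    label: str
    confidence: float


@dataclass
class MotionEvent:
    """An event emitted by a real or synthetic motion sensor."""
    sensor_id: str
    image: bytes  # captured frame; raw bytes stand in for an encoded image


def run_yolo(image: bytes) -> List[Detection]:
    """Stand-in for dispatching the frame to a YOLO inference cluster.

    A real system would send the image to a cluster endpoint; here we
    return canned detections so the pipeline runs end to end.
    """
    return [Detection("person", 0.91), Detection("vehicle", 0.42)]


# Assumed gating policy for the decision-making component.
CRITICAL_LABELS = {"person"}
CONFIDENCE_THRESHOLD = 0.6


def should_inject(detections: List[Detection]) -> bool:
    """Decision component: surface only credible, critical detections in XR."""
    return any(
        d.label in CRITICAL_LABELS and d.confidence >= CONFIDENCE_THRESHOLD
        for d in detections
    )


def handle_event(event: MotionEvent) -> bool:
    """Full pipeline: sensor event -> YOLO metadata -> injection decision."""
    detections = run_yolo(event.image)
    return should_inject(detections)
```

The threshold-and-label gate is only one plausible policy; the paper leaves the evaluation logic to future work, so any anomaly-scoring or operator-workload model could replace `should_inject` without changing the surrounding pipeline.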

