Abstract

Multi-spectral imagery can enhance decision-making by supplying multiple complementary sources of information. However, overloading an observer with information can deter decision-making. Hence, it is critical to assess multi-spectral image displays using human performance. Accuracy and response times (RTs) are fundamental for assessment, although without sophisticated empirical designs, they offer little information about why performance is better or worse. Systems factorial technology (SFT) is a framework for study design and analysis that examines observers’ processing mechanisms, not just overall performance. In the current work, we use SFT to compare a display with two sensor images alongside each other with a display in which there is a single composite image. In our first experiment, the SFT results indicated that both display approaches suffered from limited workload capacity, and more so for the composite imagery. In the second experiment, we examined the change in observer performance over the course of multiple days of practice. Participants’ accuracy and RTs improved with training, but their capacity limitations were unaffected. Using SFT, we found that the capacity limitation was not due to an inefficient serial examination of the imagery by the participants. There are two clear implications of these results: Observers are less efficient with multi-spectral images than single images, and the side-by-side display of source images is a viable alternative to composite imagery. SFT was necessary for these conclusions because it provided an appropriate mechanism for comparing single-source images to multi-spectral images and because it ruled out serial processing as the source of the capacity limitation.

Highlights

  • Information from non-visible parts of the electromagnetic spectrum is beneficial for determining different types of environmental information in many operational settings (Hall & Llinas, 1997)

  • We suggest the use of a cognitive-theory-driven approach based on performance, systems factorial technology (SFT), for evaluating image fusion approaches and for comparing algorithmic to cognitive fusion

  • Despite the mixed effects we found with raw response times (RTs), the capacity coefficient indicated algorithmic fusion led to more limited capacity performance than cognitive fusion, despite requiring participants to attend to only one image
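As context for the capacity coefficient referenced in the highlights, the following is a minimal illustrative sketch (not taken from the paper) of how the OR-task capacity coefficient is commonly estimated from empirical survivor functions, following Townsend and Nozawa's (1995) definition C(t) = log S_AB(t) / [log S_A(t) + log S_B(t)]; the function names and data here are hypothetical, and C(t) < 1 indicates limited-capacity processing:

```python
import numpy as np

def survivor(rts, t_grid):
    """Empirical survivor function S(t) = P(RT > t) at each time point."""
    rts = np.asarray(rts, dtype=float)
    return np.array([(rts > t).mean() for t in t_grid])

def capacity_or(rt_redundant, rt_a, rt_b, t_grid):
    """OR-task capacity coefficient (Townsend & Nozawa, 1995):
    C(t) = log S_AB(t) / (log S_A(t) + log S_B(t)).
    C(t) = 1: unlimited capacity; C(t) < 1: limited capacity."""
    s_ab = survivor(rt_redundant, t_grid)  # both sources present
    s_a = survivor(rt_a, t_grid)           # source A alone
    s_b = survivor(rt_b, t_grid)           # source B alone
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.log(s_ab) / (np.log(s_a) + np.log(s_b))
```

For example, if the redundant-target RT distribution were identical to each single-target distribution (no speed-up from the second source), this estimator returns C(t) = 0.5 at every time point, i.e., severely limited capacity.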


Introduction

Information from non-visible parts of the electromagnetic spectrum is beneficial for determining different types of environmental information in many operational settings (Hall & Llinas, 1997). Long-wave infrared (LWIR) emissions are useful for detecting heat information. Together, infrared and visible sensors may supply the operator with complementary information and aid in a task such as determining a target’s location (e.g., a person) relative to an object in the scene (Toet, IJspeert, Waxman, & Aguilar, 1997). There are several alternative ways to present an observer with multiple sensor images simultaneously. A common family of approaches, which we refer to as algorithmic fusion, is to combine relevant information from two sensor images into one composite image (Burt & Kolczynski, 1993). Alternatively, information from each sensor could be displayed in two separate images. Presenting all available information moves the choice of relevant information to the operator rather than relying on an algorithm to select useful sensor information.
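To make the algorithmic-fusion idea concrete, here is a minimal sketch of the simplest possible pixel-level fusion, a per-pixel weighted average of co-registered visible and LWIR images. This is only an illustrative baseline and an assumption on our part; production fusion methods such as Burt and Kolczynski's (1993) scheme use multi-resolution pyramid decompositions rather than direct averaging:

```python
import numpy as np

def fuse_average(visible, lwir, w=0.5):
    """Toy algorithmic fusion: per-pixel weighted average of two
    co-registered images (visible and long-wave infrared).
    Real systems use multi-resolution schemes (e.g., gradient
    pyramids); this baseline only illustrates the concept."""
    visible = np.asarray(visible, dtype=float)
    lwir = np.asarray(lwir, dtype=float)
    return w * visible + (1 - w) * lwir
```

With w = 0.5, each composite pixel is simply the mean of the two sensor values at that location, so detail unique to either sensor is attenuated rather than selected, which is one reason more sophisticated fusion algorithms exist.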
