Abstract

This study is part of the Separation Management research program, whose goals include improving the FAA’s operational Conflict Probe function. Conflict Probe alerts air traffic controllers to conflicts, or situations in which aircraft will be too close to each other. The present study is one link in a chain of research efforts. We used the results of a meta-analysis of Human Factors literature on automation accuracy (Rein, Masalonis, Messina, & Willems, in press) in conjunction with FAA mathematical studies on the accuracy of the current Conflict Probe prototype (Crowell, Fabian, Young, Musialek, & Paglione, 2011; Crowell & Young, 2012) to determine the acceptability of the prototype’s conflict detection performance. The present results will feed upcoming operational research, including a human-in-the-loop (HITL) simulation in which the prototype will be used, by helping establish whether the prototype is “good enough” to improve joint human-automation system performance. In addition, the present analysis enhances the methodology for determining the accuracy of the operational Conflict Probe, although for this paper we did not evaluate or report on operational data. We obtained data from the aforementioned FAA mathematical analyses, which reported the prototype’s performance on some of the standard signal detection theory (SDT) metrics. We further analyzed their data to generate values on a wider set of accuracy metrics, compared the results to the findings of Rein et al. regarding how accurate automation “should” be, and also considered the results from an operational/face-validity perspective. We focused on reliability, a measure of the automation’s overall percent correct, which has been used in past multi-experiment analyses of automation accuracy (Wickens & Dixon, 2007) and which Rein et al. found to be related to system performance. With a “best case” estimate of Conflict Probe reliability, its performance far exceeds that needed to improve system performance. 
However, the estimate may have been too liberal from an operational perspective, because the input data included many correct rejections where the proximity of the aircraft was well beyond the distances defining a conflict. In such cases, the automation’s failure to alert would have been technically correct, but not useful to the controller, who would know without any automated assistance that no conflict was present. We conclude that the current Conflict Probe prototype is suitable for conducting the HITL research, but that additional scenario evaluation research should be run to determine for what kinds of conflicts and near-conflicts the automation can complement rather than duplicate controller skill. This scenario evaluation research, and related follow-up mathematical analysis, will answer the question “how accurate is Conflict Probe?” The HITL will answer “how accurate does Conflict Probe need to be?” These answers will be evaluated in conjunction with each other to improve the operational Conflict Probe.
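The reliability measure discussed above can be illustrated with a minimal sketch: it is the proportion of all automation decisions that are correct, computed from the four standard SDT outcome counts (hits, misses, false alarms, correct rejections). The function name and the example counts below are purely illustrative assumptions, not values from the cited FAA analyses.

```python
def reliability(hits, misses, false_alarms, correct_rejections):
    """Overall proportion correct across all automation decisions:
    (hits + correct rejections) / (all four SDT outcomes)."""
    total = hits + misses + false_alarms + correct_rejections
    return (hits + correct_rejections) / total

# Hypothetical counts for illustration only:
# 90 conflicts detected, 10 missed, 40 false alerts, 860 correct non-alerts
print(reliability(90, 10, 40, 860))  # 0.95
```

Because correct rejections typically dominate the denominator, including many trivially easy non-conflict encounters (aircraft far beyond the conflict-defining distances) inflates this measure, which is the operational concern raised above.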
