Abstract

We propose a toolkit for objectively evaluating the effectiveness of new technologies for improving human cognitive performance. In complex socio-technical systems such as nuclear power generation and air traffic management, garden path scenarios have been used effectively to anchor initial inaccurate hypotheses, which are then monitored for movement toward the correct hypotheses as accumulating evidence over time makes it easier to change the diagnosis. The time to reach an accurate diagnosis in a well-crafted simulation scenario with an initial inaccurate anchor hypothesis is an objective, repeatable measure of performance for the macrocognitive function of sensemaking. The time to verbalize the recognition of critical cues, which become increasingly less subtle over time, as well as the time to move from the inaccurate diagnosis to one of the correct diagnoses in the complete diagnostic set, can all be reliably measured and compared in an across-subjects study design. Within-subject designs using conceptually matched scenarios can also be employed if asymmetric learning effects are managed. A minimal illustrative sketch of how such timing measures might be extracted and compared appears below.
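The sketch below illustrates, under stated assumptions, how the timing measures described above could be computed from scenario event logs. The event names, log fields, and the two-condition comparison are hypothetical illustrations, not part of the published toolkit.

```python
# Hypothetical sketch: computing time-to-accurate-diagnosis from scenario event logs.
# Field names and conditions are illustrative assumptions, not the authors' implementation.
from dataclasses import dataclass
from statistics import mean
from typing import Optional


@dataclass
class ScenarioLog:
    """Timestamped events (seconds from scenario start) for one participant."""
    participant_id: str
    condition: str                           # e.g., "baseline" vs. "new_technology"
    anchor_onset: float                      # when the misleading (garden path) cues begin
    cue_verbalizations: list[float]          # times the participant verbalized each critical cue
    correct_diagnosis_time: Optional[float]  # None if never reached within the scenario


def time_to_accurate_diagnosis(log: ScenarioLog) -> Optional[float]:
    """Elapsed time from anchor onset to the first correct diagnosis."""
    if log.correct_diagnosis_time is None:
        return None
    return log.correct_diagnosis_time - log.anchor_onset


def compare_conditions(logs: list[ScenarioLog]) -> dict[str, float]:
    """Mean time-to-accurate-diagnosis per condition (across-subjects comparison)."""
    by_condition: dict[str, list[float]] = {}
    for log in logs:
        t = time_to_accurate_diagnosis(log)
        if t is not None:
            by_condition.setdefault(log.condition, []).append(t)
    return {cond: mean(times) for cond, times in by_condition.items()}


if __name__ == "__main__":
    logs = [
        ScenarioLog("p01", "baseline", 60.0, [180.0, 300.0], 420.0),
        ScenarioLog("p02", "new_technology", 60.0, [150.0, 240.0], 330.0),
    ]
    print(compare_conditions(logs))  # e.g., {'baseline': 360.0, 'new_technology': 270.0}
```

The same pattern could be extended to the other measures mentioned in the abstract, such as the latency of each critical-cue verbalization relative to when that cue was introduced into the scenario.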
