Abstract

The AMBR Model Comparison Project: Round III — Modeling Category Learning

Session Organizers:
Kevin A. Gluck (kevin.gluck@williams.af.mil), Air Force Research Laboratory, 6030 S. Kent St., Mesa, AZ 85212 USA
Richard W. Pew (pew@bbn.com), BBN Technologies, 10 Moulton St., Cambridge, MA 02138 USA

The goal of the Agent-based Modeling and Behavior Representation (AMBR) Model Comparison Project is to advance the state of the art in cognitive modeling. It is organized as a series of model comparisons, moderated by a team from BBN Technologies. In each comparison, a challenging behavioral phenomenon is chosen for study. Data are collected from humans performing the task. Cognitive models representing different modeling architectures are created, run on the task, and then compared to the collected data. The current effort focuses on models of category learning in a dynamic, dual-task environment. Model comparisons such as this, especially those with directly comparable human data, are rare. While models of category learning are commonplace, the fact that these are models of integrative performance, not just models of category learning in isolation, makes this set of presentations unique.

Experiment Design and Comparison of Human and Model Data
David Diller (ddiller@bbn.com), Yvette Tenney (ytenney@bbn.com), BBN Technologies

This experiment involved a classic concept learning task embedded in an air traffic control situation. Subjects had to learn to make correct decisions to accept or reject altitude change requests, based on three two-valued properties of the aircraft (percent fuel remaining, aircraft size, and turbulence level). A novel feature of the experiment was the addition of multi-tasking to this concept learning paradigm. In addition to the altitude change requests (the concept learning task), the participant had to hand off a number of aircraft to adjoining controllers (the secondary task).
The design consisted of 9 conditions, defined by 3 category structures and 3 workload levels. The three category structures, borrowed from Shepard, Hovland, and Jenkins (1961), were: single attribute relevant (Type I), a single-attribute rule plus exceptions (Type III), and no rule (Type VI). The three workload levels consisted of 0, 12, or 16 required handoffs, in addition to the 16 altitude requests. It was expected that both category structure and workload level would affect performance. There were 8 scenarios, or trials, lasting ten minutes each. One hour of training on the mechanics of the tasks preceded the trials. Ninety humans and four different human performance models, described in subsequent abstracts, were run through the scenarios. The interface, consisting of a radar screen with moving aircraft and action buttons, was designed to accommodate both humans and models. Humans were randomly assigned to one condition (ten per condition). The models were run one or more times in each condition.

All of the modelers were given the human learning data as soon as they were collected, while the models were still under development. It was expected, therefore, that the models would fit those data fairly well. However, a transfer test (for which the modelers were not given the human data in advance) provides an opportunity to test the generalizability of the models' predictions. Results for both humans and models will be presented on the effects of category structure and workload over trials. Human data and model data are available for the following measures: learning curves (probability of error) on the concept learning task, performance errors on the secondary task (missed and incorrect actions), reaction time on both the concept learning and secondary tasks, self-rated workload (collected from the models too!), and self-reports on rule discovery and other strategies on the concept task (humans only).
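The three Shepard, Hovland, and Jenkins category structures can be made concrete with a short sketch. In the sketch below, the eight stimuli are triples of binary attribute values; the Type III assignment shown is an illustrative rule-plus-exceptions instantiation (the exact exemplar assignment used in the experiment is not specified here), while Type I (one relevant attribute) and Type VI (three-bit parity, which admits no simpler rule than memorization) follow their standard definitions.

```python
from itertools import product

# The eight stimuli: three two-valued dimensions
# (e.g. fuel level, aircraft size, turbulence level).
stimuli = list(product([0, 1], repeat=3))

def type_i(s):
    """Type I: a single attribute determines the category."""
    return s[0]

# Illustrative Type III instantiation (an assumption, not the
# experiment's actual assignment): dimension 0 predicts the
# category except for two swapped exemplars.
EXCEPTIONS = {(0, 1, 1), (1, 1, 1)}

def type_iii(s):
    base = s[0]
    return 1 - base if s in EXCEPTIONS else base

def type_vi(s):
    """Type VI: no rule; equivalent to three-bit parity, so every
    exemplar must effectively be memorized."""
    return s[0] ^ s[1] ^ s[2]

# Each structure splits the eight stimuli into two categories of four.
for classify in (type_i, type_iii, type_vi):
    accepted = [s for s in stimuli if classify(s) == 1]
    print(classify.__name__, accepted)
```

Each structure yields a balanced 4/4 split of the stimulus set; what differs is how compactly the split can be described, which is the property expected to drive the learning-difficulty ordering.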
This presentation will set the stage for the modelers to describe the mechanisms and assumptions that allow their models to replicate the results.

An EPIC-Soar Model of Concurrent Performance on a Category Learning and a Simplified ATC Task
Ron S. Chong (rchong@gmu.edu), George Mason University
Robert E. Wray (wray@soartech.com), Soar Technology, Inc.

During the first phase of the AMBR project, we developed a model of a simplified en-route air traffic control task. That model was built using the EPIC-Soar architecture, an integration of the perceptual and motor systems of the EPIC architecture with Soar, a learning cognitive architecture. The task to be modeled for the current phase of AMBR is the combination of the same ATC task with a new concept acquisition task. Our approach to building the new model has been to reuse, in a modular fashion, previous Soar models for the subtasks. The ATC model is essentially the same as that of the previous AMBR phases. To produce the learning behavior, we have incorporated an existing process model of concept learning called SCA (symbolic concept acquisition). SCA was developed in Soar and has been
