Abstract

This paper presents a learning theory pertinent to dynamic decision making (DDM) called instance-based learning theory (IBLT). IBLT proposes five learning mechanisms in the context of a decision-making process: instance-based knowledge, recognition-based retrieval, adaptive strategies, necessity-based choice, and feedback updates. IBLT suggests that in DDM people learn through the accumulation and refinement of instances, which contain the decision-making situation, the action taken, and the utility of the decision. As decision makers interact with a dynamic task, they recognize a situation according to its similarity to past instances, adapt their judgment strategies from heuristic-based to instance-based, and refine their accumulated knowledge according to feedback on the results of their actions. IBLT's learning mechanisms have been implemented in an ACT-R cognitive model. Through a series of experiments, this paper shows how IBLT's learning mechanisms closely approximate the relative trend magnitude and performance of human data. Although the cognitive model is bounded within the context of one dynamic task, IBLT is a general theory of decision making applicable to other dynamic environments.

Highlights

  • Dynamic decision making (DDM) is characterized by multiple, interdependent, and real-time decisions, occurring in an environment that changes both independently and as a function of a sequence of actions (Brehmer, 1990; Edwards, 1962)

  • Retrieves alternatives based on the priority determined by a heuristic

  • In this paper we propose that decision making in DDM occurs by the acquisition, retrieval, and refinement of decision–situation–utility instances

Summary

Introduction

Dynamic decision making (DDM) has been characterized by multiple, interdependent, and real-time decisions, occurring in an environment that changes both independently and as a function of a sequence of actions (Brehmer, 1990; Edwards, 1962; Kersthold & Raaijmakers, 1997). Our interpretation is that over time decision makers increasingly rely on their accumulated knowledge to make decisions, taking advantage of their prior experience. Based on these results, we have proposed that the most likely learning mechanism in DDM is the acquisition and retrieval of decision instances or examples. This proposition is supported by theories of decision making under uncertainty (Gilboa & Schmeidler, 1995, 2000) as well as by observations of decision makers acting in time-constrained real-world situations (Klein, Orasanu, Calderwood, & Zsambok, 1993; Pew & Mavor, 1998; Zsambok & Klein, 1997). We conclude with a discussion of the results and our conclusions.
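The acquisition–retrieval–refinement loop described above can be sketched in a few lines of Python. Note that the class name, the Gaussian similarity metric, and the similarity-weighted scoring rule are illustrative assumptions for this sketch; they are not the CogIBLT/ACT-R model evaluated in the paper:

```python
import math


class InstanceBasedLearner:
    """Minimal sketch of instance-based learning (not the paper's ACT-R
    implementation): decisions are made by retrieving stored
    situation-decision-utility (SDU) instances similar to the current
    situation, and knowledge is refined by storing feedback on outcomes."""

    def __init__(self):
        self.instances = []  # accumulated (situation, decision, utility) triples

    @staticmethod
    def similarity(a, b):
        # Gaussian similarity over numeric situation vectors -- an
        # illustrative assumption; IBLT leaves the metric open.
        return math.exp(-math.dist(a, b))

    def choose(self, situation, options):
        # Score each option by the sum of similarity-weighted utilities
        # of past instances that used it (a simplification of ACT-R's
        # blended retrieval); options with no instances score 0.
        def score(option):
            return sum(
                self.similarity(situation, s) * u
                for s, d, u in self.instances
                if d == option
            )
        return max(options, key=score)

    def feedback(self, situation, decision, utility):
        # Refinement: the observed outcome becomes a new instance.
        self.instances.append((situation, decision, utility))
```

For example, after observing that decision "B" yielded a higher utility in a nearby situation, the learner prefers "B" there:

```python
learner = InstanceBasedLearner()
learner.feedback((1.0, 0.0), "A", 1.0)
learner.feedback((0.0, 1.0), "B", 2.0)
learner.choose((0.1, 0.9), ["A", "B"])  # → "B"
```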

Learning and skill acquisition in dynamic decision making
Instance-based learning in dynamic decision making
Recognition
Judgment
Choice
Feedback
Summary
CogIBLT: an ACT-R implementation of IBLT
Dynamic decision-making task
Structure of SDUs and the decision process in CogIBLT
Evaluation of alternatives
Simulation experiments
Human data collection
Data collection from CogIBLT
Experiment series 1: recognition process
Experiment series 2: judgment
Experiment series 3: choice
Experiment series 4: feedback
Summary of experiments
Process analysis
Average fit to decision rules
Instance similarity
Exploring individual data
Discussion of results
Concluding remarks