A method of analyzing reaction time data in recognition memory is presented that uses an explicit model of latency distributions. This distributional method allows us to distinguish between processes in a way that the traditional measure, mean latency, cannot. The behavior of latency distributions is described, and four experiments are reported that show how recognition accuracy and latency vary with independent variables such as study and test position, rate of presentation, and list length. These data are used to develop and test an empirical model of latency distributions. The resulting analyses, together with functional relationships derived from the experimental data, are used to test several theories of recognition memory. All of the theories examined show problems in light of these stringent tests, and general properties that a model requires to account for the data are suggested. As well as arguing for distributional analyses of reaction time data, this paper presents a wide range of phenomena that any theory of recognition memory must explain.

Over the last few years, researchers have been developing theories of recognition memory based not only on accuracy measures but also on latency measures. In this article, we consider latency measures in recognition memory. Results from four experiments are presented, and an empirical model for latency distributions is developed. Latency distributions are shown to provide much more information than can be obtained from mean latency, the most common dependent variable in reaction time research. On this basis, a strong case is made for studying distributional properties: several current theories prove inadequate or wrong when examined in the light of distributional analyses. These theories are further evaluated against functional relationships extracted from the results of the four experiments.
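To illustrate the kind of distinction a distributional analysis can make, here is a minimal sketch, not the paper's actual model (the abstract does not specify one): fitting an ex-Gaussian, a Normal convolved with an Exponential and a common explicit model of latency distributions, to two simulated conditions that share the same mean latency but differ in shape. The parameter names mu, sigma, and tau and the simulated data are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Two hypothetical conditions with the same mean latency (~600 ms) but
# different shapes: one shifts the whole distribution, the other
# lengthens the exponential tail.
rt_shift = rng.normal(500, 50, 1000) + rng.exponential(100, 1000)
rt_tail = rng.normal(400, 50, 1000) + rng.exponential(200, 1000)

def neg_log_lik(params, rt):
    """Negative log-likelihood of an ex-Gaussian(mu, sigma, tau)."""
    mu, sigma, tau = params
    if sigma <= 0 or tau <= 0:
        return np.inf
    # scipy parameterizes the ex-Gaussian as exponnorm with K = tau/sigma.
    return -stats.exponnorm.logpdf(rt, K=tau / sigma, loc=mu, scale=sigma).sum()

def fit_exgauss(rt):
    # Moment-based starting values, refined by direct likelihood maximization.
    start = np.array([rt.mean() - 0.8 * rt.std(), 0.5 * rt.std(), 0.8 * rt.std()])
    return minimize(neg_log_lik, start, args=(rt,), method="Nelder-Mead").x

for label, rt in (("shifted", rt_shift), ("long tail", rt_tail)):
    mu, sigma, tau = fit_exgauss(rt)
    print(f"{label:9s} mean={rt.mean():6.1f}  mu={mu:6.1f}  "
          f"sigma={sigma:5.1f}  tau={tau:6.1f}")
```

Both simulated conditions yield nearly identical mean latencies, yet the fitted parameters separate a shift of the whole distribution from a lengthening of its tail, which is the kind of process distinction the abstract attributes to distributional methods and which mean latency alone cannot make.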