Abstract

This article presents a formal statistical model for assessing the word frequency effect in recognition memory. The topic is relevant because word frequency is the best predictor of performance in recognition memory tasks. Signal Detection Theory (SDT) was applied, with high-frequency and low-frequency words serving as signal items. The SDT analysis assumes orthogonality of the response categories: hits, false alarms, correct rejections, and incorrect rejections (misses). Ninety-six adult male and female students participated in two experiments: one conducted in the laboratory and the other in the classroom. The words selected for the memory task contained 3 to 5 letters and 1 or 2 syllables to control for length. Significant differences were found between high-frequency and low-frequency words in the number of false alarms in both experiments; Cohen's effect sizes were 0.6 and 0.45, respectively. The word frequency effect was F(1, 46) = 4.13, MSE = 2.34, p = 0.003 in the first experiment and F(1, 46) = 3.71, MSE = 12.36, p = 0.01 in the second. A formal model based on the Receiver Operating Characteristic (ROC) data is presented to assess the data trends for high- and low-frequency words. Two differentiated models were obtained: a continuous model based on the high-frequency stimuli and a threshold model based on the low-frequency stimuli.
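
For readers unfamiliar with the SDT indices referred to above, the following minimal Python sketch (not taken from the article; all response counts are invented for illustration) shows how hit and false-alarm rates are z-transformed and combined into the sensitivity index d' and the criterion c.

    # Minimal sketch, not the authors' code: basic SDT indices from
    # hypothetical response counts in one word-frequency condition.
    from scipy.stats import norm

    def sdt_indices(hits, misses, false_alarms, correct_rejections):
        """Return hit rate, false-alarm rate, sensitivity d' and criterion c."""
        hr = hits / (hits + misses)                                # P("old" | old item)
        far = false_alarms / (false_alarms + correct_rejections)  # P("old" | new item)
        z_hr, z_far = norm.ppf(hr), norm.ppf(far)                  # z-transformed rates
        d_prime = z_hr - z_far                                     # sensitivity
        c = -0.5 * (z_hr + z_far)                                  # response criterion
        return hr, far, d_prime, c

    # Hypothetical counts for one participant
    print(sdt_indices(hits=35, misses=13, false_alarms=10, correct_rejections=38))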

Highlights

  • Under equivalent experimental conditions, low frequency words are better recognized than high frequency words

  • Significant differences were found between HF and LF words in the number of false alarms: analysis of variance (ANOVA), F(1, 46) = 4.13, MSE = 2.34, p = 0.003

  • A familiarity process is indicated by the linear shape of the z-Receiver Operating Characteristic (z-ROC) data for HF words



Introduction

Low frequency words are better recognized than high frequency words. One level is derived from the variable X (or s in Signal Detection Theory, SDT), and the other, c, from the criterion (criterion or threshold in SDT). This procedure can be generalized by the successive replacement of elements in order to obtain an equivalent general linear model [8,9]. We hypothesized that the z-ROC data would be discontinuous (non-linear). This hypothesis would be plausible in experimental designs that allow sufficient conscious processing time (e.g., >500 ms per word) to produce learning [14]. Our experimental hypothesis was that there would be more false alarms in recognition memory for HF words and fewer false alarms for LF words. This outcome would be detected through the different trends in the distribution of the mathematical functions fitted to each set of z-ROC data: LF stimuli would produce breaks in continuity in the trend, and thereby threshold effects (U-shaped z-ROC data), whereas HF stimuli would be associated with z-ROC data resembling straight lines.
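
As an illustration of the kind of trend assessment described above (a sketch using invented cumulative rates, not the authors' analysis), a straight-line fit to the z-ROC points can be compared with a quadratic fit: a curvature term near zero is consistent with a continuous Gaussian (familiarity) model, whereas reliable curvature, a U shape, is the usual signature of a threshold model.

    # Illustrative sketch only, with invented cumulative hit and false-alarm
    # rates from a confidence-rating procedure; not the article's analysis code.
    import numpy as np
    from scipy.stats import norm

    hit_rates = np.array([0.30, 0.52, 0.68, 0.80, 0.90])
    fa_rates  = np.array([0.05, 0.14, 0.27, 0.44, 0.65])

    z_h, z_f = norm.ppf(hit_rates), norm.ppf(fa_rates)   # z-ROC coordinates

    lin = np.polyfit(z_f, z_h, 1)     # straight line: slope, intercept
    quad = np.polyfit(z_f, z_h, 2)    # adds a curvature (quadratic) term

    sse_lin = np.sum((z_h - np.polyval(lin, z_f)) ** 2)
    sse_quad = np.sum((z_h - np.polyval(quad, z_f)) ** 2)
    print("linear fit (slope, intercept):", lin)
    print("residual SS  linear: %.4f  quadratic: %.4f" % (sse_lin, sse_quad))
    # A quadratic coefficient near zero leaves the linear (continuous) account
    # intact; systematic curvature points to a threshold interpretation.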

