Abstract

Ten continuous, discrete, and hybrid models of recognition memory are considered in the traditional paradigm in which response bias is manipulated via base rates or payoff schedules. We present an efficient method for computing the Fisher information approximation (FIA) to the normalized maximum likelihood index (NML) for these models, and a relatively efficient method for computing NML itself. This permits a comparative evaluation of the complexity of the different models from the minimum-description-length perspective. Furthermore, we evaluate how well FIA approximates NML. Finally, model-recovery studies reveal that the minimum-description-length principle identifies the true model more frequently than AIC and BIC. These results should be useful for research on recognition memory, and also in other fields (such as perception, reasoning, and working memory) in which these models play a role.
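
For orientation, the standard definitions of the NML index and its Fisher information approximation (a sketch of the usual formulation, not quoted from the paper itself) are as follows. For data $x$ consisting of $n$ observations, a model $f(\cdot \mid \theta)$ with $k$ free parameters, maximum-likelihood estimate $\hat\theta(x)$, and per-observation Fisher information matrix $I(\theta)$,

\[
\mathrm{NML}(x) \;=\; \frac{f\!\left(x \mid \hat\theta(x)\right)}{\int f\!\left(y \mid \hat\theta(y)\right) dy},
\qquad
\mathrm{FIA} \;=\; -\ln f\!\left(x \mid \hat\theta(x)\right) \;+\; \frac{k}{2}\ln\frac{n}{2\pi} \;+\; \ln \int_{\Theta} \sqrt{\det I(\theta)}\, d\theta .
\]

The last two terms of FIA constitute the model's complexity penalty, which approximates the logarithm of the NML normalizing integral; the methods described in the abstract concern the efficient computation of these quantities for the recognition-memory models under study.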
