Abstract

A slowdown or a speedup in response times across experimental conditions can be taken as evidence of online deployment of knowledge. However, response-time difference measures are rarely evaluated for reliability, and there is no standard practice for estimating it. In this article, we used three open data sets to explore an approach to reliability that is based on mixed-effects modeling and to examine model criticism as an outlier treatment strategy. The results suggest that the model-based approach can be superior but show no clear advantage of model criticism. We followed up these results with a simulation study to identify the specific conditions in which the model-based approach has the most benefits. Researchers who cannot include a large number of items and have a moderate level of noise in their data may find this approach particularly useful. We conclude by calling for more awareness of, and research on, the psychometric properties of measures in the field.
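The abstract does not spell out the authors' implementation, but the general workflow it describes can be sketched as follows. This is a hypothetical illustration, assuming simulated response-time data and a `statsmodels` mixed-effects fit with by-subject random intercepts and condition slopes; the model-criticism step refits after trimming trials with large standardized residuals. Variable names (`rt`, `condition`, `subject`) and the 2.5 SD cutoff are illustrative choices, not taken from the paper.

```python
# Hypothetical sketch: mixed-effects model of a response-time condition
# effect, plus model criticism (refit after trimming extreme residuals).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_items = 30, 20
rows = []
for s in range(n_subj):
    subj_int = rng.normal(0, 50)     # random intercept per subject
    subj_slope = rng.normal(0, 20)   # random condition slope per subject
    for i in range(n_items):
        for cond in (0, 1):
            rt = 600 + subj_int + (40 + subj_slope) * cond + rng.normal(0, 80)
            rows.append({"subject": s, "item": i, "condition": cond, "rt": rt})
data = pd.DataFrame(rows)

# Fit with a random intercept and a random condition slope by subject
m1 = smf.mixedlm("rt ~ condition", data, groups=data["subject"],
                 re_formula="~condition").fit()

# Model criticism: drop trials whose residual exceeds 2.5 SD, then refit
resid = m1.resid
keep = np.abs(resid) < 2.5 * resid.std()
trimmed = data[keep]
m2 = smf.mixedlm("rt ~ condition", trimmed, groups=trimmed["subject"],
                 re_formula="~condition").fit()
print(len(data), int(keep.sum()))
```

The by-subject variance components from such a fit are what a model-based reliability estimate would build on, in contrast to split-half or test-retest correlations computed on raw difference scores.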
