Abstract
When a cognitive ability is assessed repeatedly, test scores and ability estimates are often observed to increase across test sessions. This phenomenon is known as the retest (or practice) effect. One explanation for retest effects is that situational test anxiety interferes with a testee’s performance during earlier test sessions, thereby creating systematic measurement bias on the test items (interference hypothesis), while the influence of anxiety diminishes with test repetitions. This explanation is controversial, since the presence of measurement bias during earlier measurement occasions cannot always be confirmed. Alternatively, it is argued that people from the lower end of the ability spectrum become aware of their deficits in test situations and therefore report higher anxiety (deficit hypothesis). In 2014, a structural equation model was proposed that specifically allows comparing these two hypotheses with regard to their explanatory power for the negative anxiety–ability correlation found in cross-sectional assessments. We extended this model for use in longitudinal studies to investigate the impact of test anxiety on test performance and on retest effects. A latent neighbor-change growth curve was implemented in the model that enables estimation of retest effects between all pairs of successive test sessions. Systematic restrictions on model parameters allow testing the hypothesized reduction in anxiety interference over the test sessions, which can be compared to retest effect sizes. In an empirical study with seven measurement occasions, we found that a substantial reduction in interference at the second test session was associated with the largest retest effect in a figural matrices test, which served as a proxy measure for general intelligence. However, smaller retest effects occurred up to the fourth test administration, whereas evidence for anxiety-induced measurement bias was found only for the first two test sessions.
Anxiety and ability were not negatively correlated at any measurement occasion once the interference effects were controlled for. Implications, limitations, and suggestions for future research are discussed.
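The neighbor-change parameterization described above can be illustrated numerically: each occasion's latent mean equals the first occasion's mean plus the sum of all preceding successive-change (retest effect) parameters. A minimal sketch of this bookkeeping, using purely illustrative values rather than the study's estimates:

```python
import numpy as np

# Hypothetical successive-change (retest effect) parameters for 7 sessions.
# deltas[t] is the latent mean change from session t+1 to session t+2
# (illustrative values only, not estimates from the reported study).
baseline = 7.66                                     # latent mean at session 1
deltas = np.array([1.2, 0.6, 0.5, 0.0, 0.0, 0.0])   # 6 neighbor changes

# Neighbor-change growth curve: occasion means are the baseline plus the
# cumulative sum of the successive-change parameters.
occasion_means = baseline + np.concatenate(([0.0], np.cumsum(deltas)))
print(occasion_means)
```

With this parameterization, a retest effect between any pair of adjacent sessions is read directly off one change parameter, rather than being derived from slope factors as in a conventional linear growth curve.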
Highlights
Taking the same or an alternate but equally difficult version of a cognitive ability test more than once has been observed to lead to an improvement in test performance—a phenomenon widely known as the retest effect [1] or practice effect [2]
Matrices test scores increased over time, reaching a maximum at the fourth test session and remaining relatively constant thereafter (M1 = 7.658, SD1 = 3.110; M4 = 9.938, SD4 = 2.621)
It was applied in an empirical study where we explored retest effects occurring when taking a figural matrices test seven times
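The score gain reported in the highlights can be expressed as a standardized effect size. A minimal sketch using the reported means and standard deviations (a pooled-SD Cohen's d; the within-person correlation needed for a repeated-measures d is not reported in this summary, so it is ignored here):

```python
from math import sqrt

# Descriptives reported for the matrices test (sessions 1 and 4)
m1, sd1 = 7.658, 3.110
m4, sd4 = 9.938, 2.621

# Pooled standard deviation (equal n assumed, repeated-measures
# correlation ignored for lack of information)
sd_pooled = sqrt((sd1**2 + sd4**2) / 2)

# Standardized retest gain from session 1 to session 4
d = (m4 - m1) / sd_pooled
print(round(d, 3))
```

Under these simplifying assumptions the cumulative retest gain is a large effect by conventional benchmarks, which is consistent with the abstract's description of substantial early retest effects.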
Summary
Taking the same or an alternate but equally difficult version of a cognitive ability test more than once has been observed to lead to an improvement in test performance—a phenomenon widely known as the retest effect [1] or practice effect [2]. The effect is psychometrically represented by a significantly increased (mean) test score or ability estimate upon a repeated measurement occasion. Retesting with cognitive ability tests can be crucial in clinical practice, personnel selection, and research scenarios. In the evaluation of training procedures (e.g., for mathematical abilities), passive control groups often complete a simple retest design to control for practice effects emerging from mere repetition rather than from the training program.