Abstract

Aptitude testing is used to select candidates with the greatest potential for professional interpreter training. Implicit in this practice is the expectation that aptitude test scores predict future performance. As such, the predictive validity of score-based inferences and decisions constitutes an important rationale for aptitude testing. Although researchers have provided predictive validity evidence for different aptitudinal variables, very little research has examined the substantive meaning and robustness of such evidence. We therefore conducted this systematic review to interrogate the methodological rigour of quantitatively based prospective cohort studies of aptitude for interpreting, focusing on the substantive meaning, psychometric soundness, and statistical analytic rigour underpinning their predictive validity evidence. Our meta-evaluation of 18 eligible studies, identified through a rigorous search and screening process, shows a diverse array of practices in the operationalisation, analysis, and reporting of aptitude tests, interpreting performance assessments, and related validity evidence. The main patterns include the collection of mostly single-site data (i.e., from a single institution), the use of self-designed instruments for testing aptitude, and the under-reporting of key information on measurement and statistical procedures. These findings could help researchers better interpret existing validity evidence and design future research on aptitude testing.
