Many universities rely on data gathered from tests that are low stakes for examinees but high stakes for the programs being assessed. Given the lack of consequences associated with many collegiate assessments, the construct-irrelevant variance introduced by unmotivated students is potentially a serious threat to the validity of the inferences that institutions can draw from their assessments. Two approaches to evaluating examinee motivation are discussed in this article: a global paper-and-pencil self-report measure of students' motivation across all tests completed during a testing session, and a computer-based method that unobtrusively measures the amount of time students spend on each item in a test. This study presents evidence that the two motivation filtering methods yield similar filtered aggregate test scores, although more data were removed under the global paper-and-pencil self-report technique. Consequently, those interested in motivation filtering may not need to employ computer-based testing techniques but might instead effectively filter data from unmotivated students using self-report measures.
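As a rough illustration of what motivation filtering involves, the Python sketch below applies both kinds of filters to hypothetical examinee records and recomputes the aggregate mean score after removal. All field names, cutoff values, and data are invented for illustration; they are not drawn from the study or from any standard instrument.

```python
import statistics

# Hypothetical examinee records: a test score, a self-reported motivation
# rating, and per-item response times in seconds (all values invented).
examinees = [
    {"score": 61.0, "self_report": 4.2, "item_times": [14.0, 22.5, 9.8, 31.0]},
    {"score": 35.0, "self_report": 1.5, "item_times": [2.1, 1.8, 3.0, 2.4]},
    {"score": 58.0, "self_report": 3.9, "item_times": [1.9, 18.0, 25.2, 12.7]},
]

# Illustrative thresholds; real cutoffs would be set empirically.
SELF_REPORT_CUTOFF = 2.5   # minimum acceptable self-reported motivation
RAPID_TIME_SECONDS = 5.0   # responses faster than this look like guesses
MAX_RAPID_FRACTION = 0.25  # flag examinees who rush too many items

def motivated_by_self_report(e):
    """Global self-report filter: keep examinees rated above the cutoff."""
    return e["self_report"] >= SELF_REPORT_CUTOFF

def motivated_by_response_time(e):
    """Response-time filter: keep examinees who rarely answer rapidly."""
    rapid = sum(1 for t in e["item_times"] if t < RAPID_TIME_SECONDS)
    return rapid / len(e["item_times"]) <= MAX_RAPID_FRACTION

def filtered_mean(records, keep):
    """Drop examinees the filter flags and return the mean of the rest."""
    kept = [e["score"] for e in records if keep(e)]
    return statistics.mean(kept), len(records) - len(kept)

for name, rule in [("self-report", motivated_by_self_report),
                   ("response-time", motivated_by_response_time)]:
    mean, removed = filtered_mean(examinees, rule)
    print(f"{name} filter: mean = {mean:.1f}, examinees removed = {removed}")
```

In this toy example both rules flag the same rushed, low-motivation examinee and so produce the same filtered mean, mirroring the article's finding that the two methods yield similar aggregate scores even though they flag students on different evidence.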