In social-science research and in program evaluation, multimethod designs are well known, but few have been applied to the validation of writing tests. Yet there are good reasons to evaluate a testing program along many measures: to be of use to a variety of stakeholders, to be sensitive to the presence of conflicting perspectives, to seek convergent findings among studies with different biases, and to probe a social context that is complex, fluid, and provisional. At Washington State University, validation of the writing-placement examination system followed multiple lines of inquiry, and it therefore stands as a test case. Analysis of the findings from WSU shows the value of multiple studies to different groups: to the students, both native and nonnative speakers of English, both transfer and non-transfer; to the writing-course teachers; to the other faculty on campus; to the cross-campus corps of raters; to the chairs and heads of programs; and to the board of regents. In multiple inquiry, a key role falls to the writing-program administrator, who should probably be included on any validation team. Although multiple inquiry sometimes proved problematic at Washington State, it also proved unusually productive of recommendations for improvement.