Abstract

Assessment of test reliability and validity is often complex. Although tests of correlation are frequently used to measure inter-test agreement, such indexes measure only the strength of the linear relationship between variables and may not provide an accurate assessment of the correspondence between test results. Inspection of inter-test differences, either visually or using the r1, may provide a better indicator of the correspondence between test results and account for measurement biases. Strength of association between categorical variables can be measured using related tests such as the kappa statistic. Test reliability may be assessed by retesting, but this is not practical in many cases, because subject memory or learning may confound the results of repeated examinations. Several methods exist for determining reliability from a single test administration and for assessing the correspondence between answers to homogeneous test questions. In the continuation article (Part B) on this subject, the concept and assessment of validity will be examined in more detail, and techniques for maximizing the reliability and validity of questionnaires will be discussed.
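The kappa statistic mentioned above corrects observed agreement between two categorical ratings for the agreement expected by chance alone. As a minimal illustrative sketch (not taken from the article; the rating labels and data are hypothetical), Cohen's kappa for two raters can be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters on categorical labels,
    corrected for the agreement expected by chance."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail ratings of six subjects by two examiners.
a = ["pass", "pass", "fail", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(a, b), 3))  # observed agreement 5/6, chance 1/2
```

Here the raters agree on 5 of 6 items (p_o ≈ 0.833) while chance agreement is 0.5, giving kappa ≈ 0.667; a kappa of 0 would indicate no agreement beyond chance, and 1 perfect agreement.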
