Abstract

Despite evidence that the choice of dependent measures can significantly influence design sensitivity, many evaluators default to traditional measures that may be insensitive to intervention effects. This paper describes an innovative set of test-development guidelines for selecting items and creating aggregate scales that are better able to detect program effects. The application of these Intervention Item Selection Rules (IISRs) is illustrated during the initial development of a teacher-completed outcome measure for elementary-age children receiving psychosocial services from community mental health agencies. The major scale formed with these change-sensitive items displayed a larger effect size and an adequate reliability estimate.
