Abstract

Generalizability theory was used to examine the generalizability and dependability of outcomes from two single-item Direct Behavior Rating (DBR) scales: DBR of actively manipulating and DBR of visually distracted. DBR is a behavioral assessment tool with specific instrumentation and procedures that can be used by a variety of service delivery providers (e.g., teacher, teacher aide, parent) to collect time-series data on student behavior. The purpose of this study was to extend the findings presented by Chafouleas et al. with an examination of DBR outcomes as they are generalized across raters and rating occasions. One hundred twenty-five undergraduates viewed video clips of children engaged in an unsolvable Lego puzzle task and rated the students' behavior. A series of decision studies was used to evaluate the effects of alternate assessment conditions (variable numbers of raters and rating occasions) and interpretive assumptions (definitions of the universe of generalization). Results support the general conclusion that ratings from individual or small groups of simultaneous raters, when generalized only to that specific individual or group of individuals, can approach reliability criteria for low- and high-stakes decisions. Implications are discussed.
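To illustrate the kind of decision (D) study described above, the sketch below projects a generalizability coefficient for a fully crossed persons × raters × occasions design as the numbers of raters and occasions vary. The variance components here are hypothetical placeholders, not values reported in the study; the formula is the standard G-theory expression in which rater- and occasion-related error shrinks as those facets are averaged over.

```python
def g_coefficient(var_p, var_pr, var_po, var_pro_e, n_raters, n_occasions):
    """Projected generalizability (Eρ²) for a crossed p × r × o design.

    var_p     : person (universe-score) variance
    var_pr    : person-by-rater interaction variance
    var_po    : person-by-occasion interaction variance
    var_pro_e : residual (p × r × o, error) variance
    """
    # Relative error variance shrinks as more raters/occasions are averaged.
    rel_error = (var_pr / n_raters
                 + var_po / n_occasions
                 + var_pro_e / (n_raters * n_occasions))
    return var_p / (var_p + rel_error)

# Hypothetical variance components (illustrative only):
components = dict(var_p=0.50, var_pr=0.20, var_po=0.10, var_pro_e=0.20)

# A single rater on a single occasion yields modest dependability...
single = g_coefficient(**components, n_raters=1, n_occasions=1)

# ...while averaging over more raters and occasions raises it toward
# criteria often cited for low- and high-stakes decisions.
many = g_coefficient(**components, n_raters=5, n_occasions=10)
```

A D study of this form makes concrete how many raters and rating occasions would be needed before DBR outcomes approach a chosen reliability criterion under a given universe of generalization.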
