Abstract
A major disappointment with educational evaluative research in general, and health education evaluation in particular, is the all too frequent outcome of "no significant difference" or "no effect." A review of the evaluation literature finds experienced investigators such as Donald Campbell, Marcia Guttentag, and Carol Weiss lamenting the fact that evaluations of educational programs persistently fail to uncover statistical differences. This problem continues to plague evaluators whose experiential evidence indicates that a program may have produced some very desirable results. Thus, the traditional methods of evaluation are increasingly being called into question: evaluators are presenting theories to explain the "no effects" phenomenon, and alternate methodologies are being proposed in the literature. This article suggests that health educators not only acknowledge the existence of this dilemma, but also develop a working knowledge of alternate, qualitative methodologies, their possibilities, and their limitations, thereby encouraging the evaluation of those health programs which have traditionally been classified as unevaluable.