Abstract

Like many youth development programs, most youth mentoring programs do not have prescribed practices that target specific outcomes. Because the construct of mentoring represents a broad range of potential activities, researchers face a conundrum when making generalizable causal inferences about the effects of this and similar services. On the one hand, researchers cannot make valid experimental inferences if they do not describe what they manipulate. On the other hand, experiments that include prescribed protocols do not generalize to most mentoring programs. In most cases, researchers conducting school-based mentoring program evaluations err on the side of not sufficiently specifying treatment constructs, which limits the field's ability to make practically or theoretically useful inferences about this service. We discuss this reality in light of the fundamental logic of the experimental design and suggest several possible solutions to this conundrum. Our goal is to empower researchers to adequately specify treatments while still preserving the treatment construct validity of this and similar interventions.
