Abstract

Reform efforts in statistics education emphasize the need for students to develop statistical thinking. Critical to this goal is a solid understanding of design in the process of collecting data, evaluating evidence, and drawing conclusions. We collected survey responses from over 700 college students at the start of an introductory statistics course to determine how they evaluated the validity of different designs. Despite preferring different designs, students offered a variety of productive arguments supporting their choices. For example, some students viewed intervention as a weakness that disrupted the ability to generalize results, whereas others viewed intervention as critical for identifying causality. Our results highlight that instruction should frame design as the balancing of different priorities: namely causality, generalizability, and power.
