Abstract

Many research papers pull data from student surveys. But are those surveys well designed? Are the questions validated? Are the results comparable across studies? What exactly are we asking our students? In this work, we performed a systematic literature map of the past 15 years of papers in the three main conferences sponsored by the ACM Special Interest Group on Computer Science Education: International Computing Education Research (ICER), Innovation and Technology in Computer Science Education (ITiCSE), and the Special Interest Group on Computer Science Education Technical Symposium (SIGCSE). We searched for all papers referring to student surveys or questionnaires. Out of 1313 papers analyzed, 42 referred to surveys containing general questions applicable to many or all computer science students. Our analysis showed that many papers used surveys to extract similar types of information, such as demographics, prior experience, or motivation to study computer science. However, the questions were asked in different ways and on different scales, making it difficult or impossible to compare survey results between studies. We further found that while some studies based their questions on well-validated surveys, or at least shared their questions for possible later validation, approximately half of the papers we found neither validated their questions nor shared them to allow for post-hoc validation.
