Abstract

This pilot study examines how students’ performance evolves in an object-oriented (OO) programming course and contributes to a learning-analytics framework for similar programming courses in university curricula. First, we briefly introduce the research background: a novel OO teaching practice built on consecutive, iterative assignments that combine programming and testing tasks. We propose a planned quantitative method for assessing students’ gains in terms of programming performance and testing performance. Based on real data collected from students enrolled in our course, we use trend analysis to observe how students’ performance improves over the whole semester. Using correlation analysis, we obtain interesting findings on how students’ programming performance correlates with their testing performance, which provides persuasive empirical evidence for integrating software testing practices into an OO programming curriculum. We then conduct an empirical study on how students’ design competencies are reflected in the quality changes of their program code across consecutive assignments, analyzing the source code they submitted to the course system and the GitLab repository. Three distinct profiles are found in students’ program quality at the OO design level, and the group analysis reveals several significant differences in their programming performance and testing performance. Moreover, we systematically explain how students’ improvement in programming skill can be attributed to their OO design competency. By performing principal component analysis (PCA) on the collected software metrics, we propose a predictive OO metrics suite for both students’ programming performance and their testing performance. The results show that these quality factors can serve as useful predictors of students’ learning performance and can provide effective feedback to instructors in their teaching practice.
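To make the PCA step concrete, the following is a minimal sketch, not the authors’ actual pipeline: the four metric names (LCOM, DIT, NDM, NOC) come from the paper, while the input file, its column layout, and the choice of two components are assumptions for illustration.

```python
# Minimal sketch: derive a predictive OO metrics suite via PCA.
# Assumptions: a CSV with one row per student submission and one column per
# OO metric; the file name and the two-component choice are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

metrics = ["LCOM", "DIT", "NDM", "NOC"]          # OO design metrics named in the paper
df = pd.read_csv("submission_metrics.csv")       # hypothetical export of measured metrics

X = StandardScaler().fit_transform(df[metrics])  # PCA is scale-sensitive, so standardize first
pca = PCA(n_components=2).fit(X)                 # number of components is an assumption

# Loadings show which metrics dominate each component (e.g., an "inheritance" factor).
for i, comp in enumerate(pca.components_, start=1):
    print(f"Comp{i}:", dict(zip(metrics, comp.round(3))))
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
```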

Highlights

  • In the context of improving students’ competence in developing high-quality software, a large body of work in computing education has discussed infusing software engineering issues into the related undergraduate curricula [1]

  • In this paper, we report on learning analytics in our OO programming teaching practice; based on real participation data from continuous, formative assessment, our main contributions are (1) a close observation and learning-analytics study of how students’ performance at different stages of the course varies and interacts as the course progresses, (2) a planned quantitative approach to predicting students’ program quality and explaining their formative gains, and (3) a contribution to current educational research on related programming courses through a practical application of such an assessment infrastructure based on fine-grained learning data

  • It is noteworthy that the parameter value of Comp2 (Number of Children (NOC), Number of Overridden Methods (NDM), Depth of Inheritance (DIT)) is negatively correlated (b = −0.569) with students’ testing performance as represented by Positive Competitive Test Effectiveness (PoCTE); a minimal computational sketch follows this list
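The sketch below illustrates, on randomly generated placeholder data, how such a standardized regression coefficient can be computed; only the variable names (Comp2, PoCTE) follow the paper, and the data are in no way the study’s results.

```python
# Minimal sketch: standardized regression coefficient between a PCA component
# score (Comp2) and testing performance (PoCTE). The arrays are random
# placeholders for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
comp2 = rng.normal(size=60)                 # hypothetical Comp2 scores per student
pocte = -0.5 * comp2 + rng.normal(size=60)  # hypothetical PoCTE scores

def z(v):
    """Standardize to zero mean and unit variance."""
    return (v - v.mean()) / v.std()

# With both variables standardized, the slope equals the Pearson correlation.
b, intercept, r, p, se = stats.linregress(z(comp2), z(pocte))
print(f"standardized b = {b:.3f} (equals Pearson r = {r:.3f}), p = {p:.4f}")
```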

Summary

INTRODUCTION

In the context of improving students’ competence in developing high-quality software, a large body of work in computing education has discussed infusing software engineering issues into the related undergraduate curricula [1]. OO design competence, as a parallel teaching goal of the course, should be investigated by interpreting students’ programming performance and testing performance against the software quality attributes of their program code. In this study, to assess students’ performance on programming assignments, the quality of their submitted program code is evaluated primarily in terms of the external attribute of correctness, so we first observe the quality of students’ program code according to testing results. We use both common testing and competitive testing to validate the submitted assignments. LCOM indicates the internal quality attribute of cohesion, while inheritance is reflected by the metrics DIT, NDM, and NOC [36]. All of these metrics are measured automatically by the analysis tool on the students’ submitted program code in the GitLab repository. We discuss the regression coefficient for each component using regression modeling techniques, wherein the relationship between the dependent variable (the target) and the independent variable (the predictor) is studied.
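A regression model of this kind can be sketched as follows. The file name, table layout, and column names are hypothetical; the sketch assumes component scores (e.g., from the PCA above) as predictors and a per-student testing-performance score as the target.

```python
# Minimal sketch: regress testing performance (dependent variable / target)
# on OO metric components (independent variables / predictors) with ordinary
# least squares. The CSV and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("student_scores.csv")    # hypothetical per-student table
X = sm.add_constant(df[["Comp1", "Comp2"]])  # component scores plus an intercept term
y = df["testing_performance"]

model = sm.OLS(y, X).fit()
print(model.params)   # regression coefficient for each component
print(model.pvalues)  # significance of each predictor
```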

ANALYSES AND DISCUSSION
Findings
CONCLUSION