Abstract

Evaluating project reports fairly and consistently is one of the most difficult tasks in a course. Even with rubrics, subjectivity remains dominant when assigning a level to each criterion, and judgments can vary between evaluators. By coding and commenting on each report, the evaluation can be discretized into smaller components and the errors quantified, increasing objectivity and fairness. This work presents a preliminary comparison of a standard rubric with an analysis of coded comments from 21 group reports in a fourth-year chemical engineering course. Some correlation was found between the final overall mark assigned through the rubric and the average number of errors per page. Discretizing comments into a quantifiable measure is therefore a promising route to a more objective marking scheme.
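To make the reported comparison concrete, the sketch below shows one way the errors-per-page metric could be related to rubric marks. The report data are hypothetical placeholders (the paper's dataset is not reproduced here), and the use of Pearson correlation is an assumption; it simply illustrates the kind of association the abstract describes.

```python
# Minimal illustrative sketch: correlate rubric marks with the average
# number of coded errors per page. All numbers below are hypothetical.
from statistics import correlation  # Pearson's r; Python 3.10+

# Hypothetical (rubric_mark_percent, total_coded_comments, page_count)
# tuples, one per group report.
reports = [
    (85, 12, 10),
    (72, 30, 12),
    (64, 41, 11),
    (90, 8, 9),
    (78, 22, 13),
]

marks = [mark for mark, _, _ in reports]
errors_per_page = [comments / pages for _, comments, pages in reports]

# A negative r would indicate that reports with more coded errors per
# page tend to receive lower overall rubric marks.
r = correlation(marks, errors_per_page)
print(f"Pearson r between rubric mark and errors/page: {r:.2f}")
```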
