Abstract

Novelty is a significant factor in evaluating creative aptitude in mass examinations in Design education. It is a measure of uniqueness, computed through relative comparison among solutions. At present, novelty assessment of solutions demonstrating creative aptitude in mass examinations is carried out by domain experts, who rely on their own thought processes and assess each solution against their personal frame of reference. Moreover, with the ever-increasing number of students competing for admission to Design schools, examiners face multiple challenges, such as evaluating within a stipulated time and conducting evaluation at large scale. These difficulties might frustrate examiners and lead to errors in evaluation. This paper is geared towards exactly this issue: we explore whether technology can support examiners in such situations. Features for evaluating novelty are extracted through a human-centred design approach, and a model is proposed to evaluate novelty in academic settings, specifically in mass examinations. The model is validated through case studies covering different types of Design solutions in mass examinations and is implemented using Deep Learning (DL)-based architectures. Findings show a negligible difference between the outcomes of these architectures and human expert evaluation, which confirms the capability of the devised model. This study would support pedagogues in evaluation processes conducted on a large scale, reduce the logistics, time, and manpower required, and bring consistency to subjective assessment, thereby ensuring selection of the right candidates and increasing trust in the assessment system.
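To make the relative-comparison view of novelty concrete, the sketch below scores each solution by its average cosine dissimilarity from every other solution in the pool. The feature vectors, the cosine-distance metric, and the `novelty_scores` helper are assumptions introduced for this illustration only; the abstract does not specify the paper's actual DL architectures or feature-extraction pipeline.

```python
import numpy as np

def novelty_scores(features: np.ndarray) -> np.ndarray:
    """Novelty of each solution as its mean cosine dissimilarity
    from every other solution in the pool (relative comparison).

    features: (n_solutions, n_features) array, e.g. embeddings
    from a DL feature extractor (assumed, not given in the abstract).
    """
    # L2-normalise rows so dot products become cosine similarities
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)

    sim = unit @ unit.T                   # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)            # exclude self-comparison
    n = features.shape[0]
    mean_sim = sim.sum(axis=1) / (n - 1)  # average similarity to the rest
    return 1.0 - mean_sim                 # higher score = more unique


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=64)
    pool = base + 0.1 * rng.normal(size=(9, 64))  # nine near-identical solutions
    outlier = rng.normal(size=(1, 64))            # one dissimilar solution
    scores = novelty_scores(np.vstack([pool, outlier]))
    print(scores.round(3))  # the dissimilar solution receives the highest score
```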
