Abstract

Science learning is inherently multimodal, with students using both drawings and writings to explain their observations of physical phenomena. As such, assessments in science should accommodate the many ways students express their understanding, especially given evidence that understanding is distributed across both drawing and writing. In recent years, advanced automated assessment techniques that evaluate expressive student artifacts have emerged; however, these techniques have largely operated in isolation, each considering only a single modality. We propose a framework for the multimodal automated assessment of students' writing and drawing that leverages the synergies across modalities to create a more complete and accurate picture of a student's knowledge. Alongside the framework, we introduce two computational techniques for automatically analyzing student work: a convolutional neural network-based model for assessing student writing and a topology-based model for assessing student drawing. Evaluations with elementary students' writings and drawings collected with a tablet-based digital science notebook demonstrate that 1) each of the framework's two modalities provides an independent and complementary measure of student science learning, and 2) the computational methods accurately assess student work in both modalities and offer the potential for integration into technology-rich learning environments for real-time formative assessment.
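For concreteness, the sketch below shows what a convolutional neural network-based scorer for short written responses might look like. It is a minimal illustration in Keras under assumed inputs (responses tokenized to fixed-length integer sequences and scored on a small rubric scale), not the model evaluated in the paper; all layer sizes and names (vocab_size, max_len, num_score_levels, train_sequences, train_scores) are hypothetical.

    # Illustrative sketch only -- not the paper's writing-assessment model.
    # Assumes responses are tokenized to integer sequences of length max_len
    # and labeled with one of num_score_levels rubric scores.
    from tensorflow.keras import layers, models

    vocab_size = 5000        # assumed vocabulary size
    max_len = 200            # assumed maximum response length in tokens
    num_score_levels = 4     # assumed number of rubric score categories

    model = models.Sequential([
        layers.Input(shape=(max_len,)),
        layers.Embedding(vocab_size, 128),        # learn word embeddings
        layers.Conv1D(64, 5, activation="relu"),  # detect local n-gram features
        layers.GlobalMaxPooling1D(),              # keep strongest response per filter
        layers.Dense(64, activation="relu"),
        layers.Dense(num_score_levels, activation="softmax"),  # rubric score distribution
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_sequences, train_scores, validation_split=0.1, epochs=10)

In this kind of setup, the convolutional filters act as learned n-gram detectors over the student's text, and the final softmax layer produces a distribution over rubric score levels that could support real-time formative feedback.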
