Abstract

Designing visual models to describe and conceptualize objects and systems requires abstraction skills and a predisposition for visual interaction. Readily available modeling tools rely on users’ logical-mathematical and visual-spatial abilities to support modeling design, but they lack mechanisms to tap into users’ bodily-kinesthetic abilities. This research presents a model-driven framework for automatically developing visual editors that work with Domain Specific Languages in tangible interaction environments. The framework is illustrated through the development of an editor for entity-relationship models supported by augmented reality. A usability evaluation of the editor indicates good acceptance by users, as well as potential to support alternative interactions and to teach database concepts.
