Abstract

Touch-sensitive devices are becoming increasingly widespread, and gestural interfaces have consequently become familiar to the public. Although many gestures require frequent dragging, pinching, spreading, and rotating of the fingertips, no human performance model currently describes this interaction. In this paper, a novel user performance model is derived for virtual object manipulation on touch-sensitive displays, involving simultaneous translation, rotation, and scaling of the object. Two controlled experiments with dual-finger unimanual manipulations were conducted to validate the new model. The results indicate that the model fits the experimental data well (with R² and R values above 0.9) and outperforms several alternative models. Moreover, based on an analysis of the empirical data, the simultaneous nature of manipulation in the task is explored and several design implications are provided.
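The abstract does not specify how the three transform components are recovered from the two fingertips, but the standard geometric decomposition used in dual-finger manipulation interfaces is straightforward: the midpoint displacement gives translation, the change in the inter-finger angle gives rotation, and the ratio of inter-finger distances gives scale. The sketch below illustrates this decomposition; the function name and tuple-based point representation are illustrative choices, not the paper's implementation.

```python
import math

def two_finger_transform(p1, p2, q1, q2):
    """Decompose a dual-finger gesture into translation, rotation, and scale.

    p1, p2: initial (x, y) positions of the two fingertips
    q1, q2: current (x, y) positions of the same fingertips
    Returns ((dx, dy), rotation_in_radians, scale_factor).
    """
    # Translation: displacement of the midpoint between the two fingers.
    cx0, cy0 = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    cx1, cy1 = (q1[0] + q2[0]) / 2, (q1[1] + q2[1]) / 2
    translation = (cx1 - cx0, cy1 - cy0)

    # Rotation: change in the angle of the vector between the fingers.
    a0 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a1 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    rotation = a1 - a0

    # Scale: ratio of the current to initial inter-finger distance.
    d0 = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    d1 = math.hypot(q2[0] - q1[0], q2[1] - q1[1])
    scale = d1 / d0 if d0 else 1.0

    return translation, rotation, scale

# Example: the fingers spread apart and rotate slightly while drifting.
print(two_finger_transform((0, 0), (100, 0), (-10, 5), (120, 30)))
```

Because all three components are derived from the same pair of fingertip trajectories, they change together during a single gesture, which is what makes the manipulation simultaneous rather than sequential.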
