Abstract

This article presents a computational model that predicts finger-drag gesture performance on touchscreen devices by integrating the queueing network (QN) cognitive architecture with motion tracking. Specifically, the QN-based model was developed to predict two execution times: the finger movement time of the drag gesture (i.e., only the time during which the finger touches and drags across the touchscreen surface) and the comprehensive process time of the drag gesture (i.e., the entire time required to complete the finger-drag task, including visual attention shifts, memory storage and retrieval, and hand-finger movements). To develop predictive models for the finger movement time of the drag gesture, motion data were collected from 11 participants, and a regression analysis was conducted with hand-finger anthropometric measures and eight angular directions as parameters. Human subject data from our previous study (Jeong & Liu, 2017a) were used to evaluate the QN-based model, which generated similar outputs for both execution times (R² above 80% and root-mean-square error below 300 msec).
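The sketch below is not the authors' implementation; it only illustrates the kind of regression and evaluation the abstract describes: predicting drag-gesture finger movement time from hand-finger anthropometric measures and one of eight angular directions, then scoring the fit with R² and root-mean-square error. The predictor names and the synthetic data are assumptions made purely for the example.

```python
# Illustrative sketch (assumed variables, synthetic data) of a regression predicting
# drag-gesture finger movement time from anthropometric measures and drag direction.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 88  # e.g., 11 participants x 8 drag directions (illustrative only)

hand_length = rng.normal(185, 10, n)      # mm, assumed anthropometric predictor
finger_length = rng.normal(72, 5, n)      # mm, assumed anthropometric predictor
direction = rng.integers(0, 8, n)         # eight angular directions, coded 0-7
direction_dummies = np.eye(8)[direction]  # one-hot encode the categorical direction

X = np.column_stack([hand_length, finger_length, direction_dummies])
# Synthetic movement times (msec) so the example runs end to end.
y = 300 + 0.8 * hand_length + 1.5 * finger_length + rng.normal(0, 30, n)

model = LinearRegression().fit(X, y)
y_hat = model.predict(X)

print(f"R^2:  {r2_score(y, y_hat):.3f}")
print(f"RMSE: {mean_squared_error(y, y_hat) ** 0.5:.1f} msec")
```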
