Abstract

Typing on a smartwatch is challenging because of the fat-finger problem. Rising to the challenge, we present a soft keyboard for ultrasmall touch screen devices with efficient visual feedback integrated with autocorrection and prediction techniques. After exploring the design space to support efficient typing on smartwatches, we designed a novel and space-saving text entry interface based on an in situ decoder and prediction function that can run in real time on a smartwatch such as the LG Watch Style. We outlined the implementation details, including performance optimization techniques, and released the interface code, APIs, and libraries as open source. We examined the design decisions through simulations and studied the visual feedback methods in terms of performance and user preference. The experiment showed that users could type more accurately and quickly on the target device with our best-performing visual feedback design and implementation. The simulation results showed that a single word suggestion could yield a sufficiently high hit ratio using the optimized word suggestion algorithm.
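As a rough illustration of the single-suggestion hit ratio reported in the simulation, the sketch below estimates how often a frequency-ranked, prefix-matched suggestion recovers the intended word. The toy corpus, the suggest_word helper, and the two-character prefix are illustrative assumptions, not the paper's released word suggestion algorithm.

```python
from collections import Counter

# Hypothetical corpus and frequency table; the paper's released
# libraries and optimized suggestion algorithm are not reproduced here.
corpus = "the quick brown fox jumps over the lazy dog the fox".split()
freq = Counter(corpus)

def suggest_word(prefix):
    """Return the single most frequent word starting with the typed prefix."""
    candidates = [w for w in freq if w.startswith(prefix)]
    return max(candidates, key=lambda w: freq[w]) if candidates else None

def hit_ratio(test_words, prefix_len=2):
    """Fraction of words recovered by the single suggestion after prefix_len keystrokes."""
    hits = sum(1 for w in test_words if suggest_word(w[:prefix_len]) == w)
    return hits / len(test_words)

print(hit_ratio(["the", "fox", "lazy", "jumps"]))  # 1.0 on this toy set
```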

Highlights

  • When using a smartwatch, it is often necessary to enter text, such as sending a quick reply to a text message or adding a reminder or an event

  • We evaluate candidate visual feedback methods for displaying typed characters

  • Because screen space is limited on ultrasmall screen devices, design decisions focus on minimizing the area required for the QWERTY layout and control keys, so that the remaining screen area available for information display is as large as possible


Summary

INTRODUCTION

When using a smartwatch, it is often necessary to enter text, such as sending a quick reply to a text message or adding a reminder or an event. Little attention has been paid to the design and evaluation of visual feedback techniques, such as displaying input characters, for statistical decoder-based text entry on ultrasmall touchscreen devices. With a robust autocorrection mechanism, a wrong character input can be corrected without using the back key as long as the user continues to enter the following characters correctly. In this regard, we design a visual feedback method for character prediction that helps the user understand the decoder’s corrective behavior.
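To make the decoder's corrective behavior concrete, here is a minimal noisy-channel sketch, not the paper's released in situ decoder: a word score combines a hypothetical language-model prior with Gaussian touch likelihoods around assumed key centers, so an off-target first touch can still resolve to the intended word once the later touches land correctly.

```python
import math

# Hypothetical key centers on a miniature QWERTY (arbitrary units);
# the actual layout geometry in the paper is not reproduced here.
KEY_POS = {'t': (4.5, 0), 'y': (5.5, 0), 'h': (5.0, 1), 'e': (2.5, 0)}
VOCAB = {'the': 0.6, 'tye': 0.0001, 'ye': 0.05}  # illustrative word priors

def touch_likelihood(touch, char, sigma=0.5):
    """Gaussian likelihood of a touch point given the intended key center."""
    kx, ky = KEY_POS[char]
    d2 = (touch[0] - kx) ** 2 + (touch[1] - ky) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

def decode(touches):
    """Pick the word maximizing prior * product of per-touch likelihoods."""
    best, best_score = None, 0.0
    for word, prior in VOCAB.items():
        if len(word) != len(touches):
            continue
        score = prior
        for t, c in zip(touches, word):
            score *= touch_likelihood(t, c)
        if score > best_score:
            best, best_score = word, score
    return best

# The first touch lands near 'y' rather than 't', but the sequence is still
# decoded as "the" once the following touches hit 'h' and 'e', so no back key
# press is needed.
print(decode([(5.3, 0.1), (5.0, 1.0), (2.6, 0.1)]))  # -> "the"
```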

RELATED WORK
IMPLEMENTATION
EXPERIMENT 1
EXPERIMENT 2
SIMULATION 1
SIMULATION 2
Findings
CONCLUSION
