Abstract

The two hands and ten fingers that humans are born with are the most dexterous parts of the body and the intrinsic tools with which we have created our history. To advance human-computer interaction so that it makes use of such dexterity, an intuitive interface is highly desirable. Work in this field has ushered in the era of haptic interaction, yet advances in haptic hardware and the corresponding software development kits (SDKs) have remained at the single-point stage for many years. Compared to the original vision of multimanual, multifinger haptic devices that support both kinesthetic and tactile feedback, single-point kinesthetic haptic devices (SPHs) are a compromise between intuitive interaction and affordability. They are limited to probing-like operations and incapable of performing more sophisticated tasks. This limitation has caused SPHs to lose much of their value in real-life applications where the dexterity of human hands is essential. Another reason for the slow transition from SPHs to multipoint haptic devices (MPHs) is the uniqueness of each MPH's hardware design and software implementation. Unlike SPHs, which share a similar architecture that can easily be abstracted for common communication, MPHs can differ significantly in their form factors and underlying interaction models, such as a multimanual model for collaborative hand manipulation or a multifinger model for pinch and grasp. The situation becomes even worse when kinesthetic and tactile feedback are incorporated together in more comprehensive systems, as the two are usually treated as separate in existing hardware and SDK implementations, even though both belong to haptics and are interdependent on the human hand.
