Abstract

Wearable devices have many applications, ranging from health analytics to virtual and mixed reality interaction to industrial training. For wearable devices to be practical, they must be responsive, deformable to fit the wearer, and robust to the user's range of motion. Signals produced by the wearable must also be informative enough to infer the precise physical state or activity of the user. Herein, a fully soft, wearable glove is developed that is capable of real-time hand pose reconstruction, environment sensing, and task classification. The design is easy to fabricate from low-cost, commercial off-the-shelf items in a manner that is amenable to automated manufacturing. To realize such capabilities, resistive and fluidic sensing technologies are merged with machine learning neural architectures. The glove is formed from a strain-sensitive conductive knit, which provides information through a network of resistance measurements. Fluidic sensing is captured via pressure changes in fibrous, sewn-in flexible tubes, which measure interactions with the environment. The system can reconstruct user hand pose and identify sensory inputs such as holding force, object temperature, conductivity, material stiffness, and user heart rate, all with high accuracy. The ability to identify complex, environmentally dependent tasks, including held-object identification and handwriting recognition, is demonstrated.

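The abstract describes two complementary signal streams: a network of resistance measurements from the strain-sensitive conductive knit and pressure readings from the sewn-in fluidic tubes. The following is a minimal illustrative sketch, not the paper's implementation: the channel counts, names, and normalization below are assumptions, and the actual sensor layout and processing are defined in the paper's Methods. It only shows how the two streams could be combined into one feature vector for downstream inference.

```python
import numpy as np

# Assumed channel counts for illustration only; the real glove's sensor
# layout and acquisition pipeline are described in the paper.
N_RESISTIVE = 64   # resistance measurements across the conductive knit
N_FLUIDIC = 5      # pressure readings from the sewn-in flexible tubes

def make_feature_vector(resistances: np.ndarray, pressures: np.ndarray) -> np.ndarray:
    """Concatenate normalized resistive and fluidic readings into one sample.

    resistances: shape (N_RESISTIVE,), raw resistance values (ohms)
    pressures:   shape (N_FLUIDIC,), raw tube pressures (e.g., kPa)
    """
    # Per-modality normalization keeps the two signal ranges comparable
    # before they are passed to a learned model.
    r = (resistances - resistances.mean()) / (resistances.std() + 1e-8)
    p = (pressures - pressures.mean()) / (pressures.std() + 1e-8)
    return np.concatenate([r, p])

# Example usage with synthetic readings
sample = make_feature_vector(
    resistances=np.random.uniform(100, 500, N_RESISTIVE),
    pressures=np.random.uniform(95, 110, N_FLUIDIC),
)
print(sample.shape)  # (69,)
```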
Highlights

  • The human skin is a natural marvel of perception, capable of robustly sensing temperature, pressure, and materials, enabling higher-level environmental reasoning and tactile skill.[1]

  • We present a wearable glove (Figure 1) that incorporates two novel sensing technologies: a resistive sensing architecture and a fluidic sensing architecture.

  • Strategic design choices ensure that these readings capture largely disjoint aspects of the interaction and do not interfere with each other, providing maximum information for a downstream neural network model to reason about. This model translates the raw signals into task-specific inferences, such as hand pose reconstruction and grasped-object classification, and is trained offline in a supervised manner on labeled ground-truth data (see the sketch after this list).

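The downstream model described in the last highlight is trained offline, in a supervised manner, on labeled ground-truth data. The sketch below illustrates one plausible setup under stated assumptions: it reuses the concatenated feature vector from the earlier example and a generic multilayer perceptron with a pose-regression head and a classification head. The dimensions, architecture, and loss weighting are placeholders for illustration, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

# Placeholder dimensions; the paper defines the actual sensor counts,
# pose parameterization, and network architecture.
N_INPUT = 69        # resistive + fluidic channels, as in the earlier sketch
N_POSE = 21 * 3     # e.g., 3D positions of 21 hand keypoints
N_CLASSES = 10      # e.g., number of grasped-object categories

class GloveNet(nn.Module):
    """Generic MLP mapping raw glove signals to pose and class outputs."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(N_INPUT, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.pose_head = nn.Linear(128, N_POSE)      # regression head
        self.class_head = nn.Linear(128, N_CLASSES)  # classification head

    def forward(self, x):
        h = self.backbone(x)
        return self.pose_head(h), self.class_head(h)

# One supervised training step on labeled (here synthetic) ground-truth data
model = GloveNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, N_INPUT)                   # batch of glove readings
pose_gt = torch.randn(32, N_POSE)              # ground-truth hand poses
label_gt = torch.randint(0, N_CLASSES, (32,))  # ground-truth object labels

pose_pred, logits = model(x)
loss = nn.functional.mse_loss(pose_pred, pose_gt) \
     + nn.functional.cross_entropy(logits, label_gt)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```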

Introduction

The human skin is a natural marvel of perception, capable of robustly sensing temperature, pressure, and materials, enabling higher-level environmental reasoning and tactile skill.[1] Creating intelligent sensorized skins has applications in soft robotics,[2] interaction and haptic devices,[3] and other intelligent wearables. Such devices could help those with impaired sensation “see” the world again by providing them with a glove that can sense for them.[4] They could be used to monitor rehabilitation efforts and rates by providing feedback on grip strength, an indicator of stroke recovery,[6] or to monitor tremors or muscle activity, which are potential health indicators.[5] In addition, there are many further applications in industrial manufacturing,[7] soft robotic sensing,[8] and mixed reality interfaces.[3] By merging a novel dual-modality sensing architecture with computational learning models, we have designed a glove that is capable of advanced sensing tasks and amenable to real-time use.

