Abstract

Flexible electronics such as tactile cognitive sensors have been broadly adopted in soft robotic manipulators to enable human-skin-mimetic perception. To position the manipulator appropriately over randomly distributed objects, an integrated guiding system is indispensable. Yet conventional guiding systems based on cameras or optical sensors exhibit limited environmental adaptability, high data complexity, and low cost-effectiveness. Herein, a soft robotic perception system with remote object positioning and multimodal cognition capability is developed by integrating an ultrasonic sensor with flexible triboelectric sensors. The ultrasonic sensor detects the object's shape and distance from the reflected ultrasound. The robotic manipulator can thereby be moved to an appropriate position to grasp the object, during which the ultrasonic and triboelectric sensors capture multimodal sensory information such as the object's top profile, size, shape, hardness, and material. These multimodal data are then fused for deep-learning analytics, yielding highly enhanced accuracy in object identification (∼100%). The proposed perception system presents a facile, low-cost, and effective methodology for integrating positioning capability with multimodal cognitive intelligence in soft robotics, significantly expanding the functionality and adaptability of current soft robotic systems in industrial, commercial, and consumer applications.
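The fusion strategy the abstract describes can be sketched in miniature: each sensing modality yields a feature vector, the vectors are concatenated, and a classifier operates on the fused representation. The sketch below is purely illustrative; all feature values, class names, and the nearest-centroid classifier (standing in for the paper's deep-learning model) are hypothetical assumptions, not the authors' implementation.

```python
# Illustrative sketch of feature-level multimodal fusion.
# All values and class labels are hypothetical; the paper fuses real
# ultrasonic and triboelectric readings and uses a deep-learning model.

def fuse(ultrasonic_features, triboelectric_features):
    """Concatenate per-modality feature vectors into one fused vector."""
    return list(ultrasonic_features) + list(triboelectric_features)

def nearest_centroid(fused, centroids):
    """Classify a fused vector by squared distance to class centroids."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(fused, centroids[label]))

# Hypothetical fused centroids for two object classes.
centroids = {
    "soft_sphere": fuse([0.2, 0.8], [0.9, 0.1]),
    "hard_cube": fuse([0.7, 0.3], [0.2, 0.8]),
}

sample = fuse([0.25, 0.75], [0.85, 0.15])  # hypothetical new reading
print(nearest_centroid(sample, centroids))  # -> soft_sphere
```

The point of the sketch is the fusion step itself: because the two modalities carry complementary cues (remote shape and distance from ultrasound; hardness and material from touch), classifying on the concatenated vector can separate objects that either modality alone would confuse.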
