Abstract

Grasping is fundamental to many robotic applications, particularly in industrial contexts, and accurate inference of object properties is a crucial step toward improving grasp quality. Dynamic and Active-pixel Vision Sensors (DAVIS), increasingly used in robotic grasping, offer superior energy efficiency, lower latency, and higher temporal resolution than traditional frame-based cameras. However, the data they generate can be complex and noisy, requiring substantial preprocessing. To address these challenges, we introduce GraspHD, an end-to-end algorithm that leverages brain-inspired hyperdimensional computing (HDC) to learn object size and hardness and to estimate grasping force. This approach avoids resource-intensive preprocessing by capitalizing on the simplicity and inherent parallelism of HDC operations. Our analysis shows that GraspHD surpasses state-of-the-art approaches in overall classification accuracy. We also implemented GraspHD on an FPGA to evaluate system efficiency. The results demonstrate that GraspHD runs 10x faster and is 26x more energy-efficient than existing learning algorithms while maintaining robust performance in noisy environments. These findings underscore the potential of GraspHD as a more efficient and effective solution for real-time robotic grasping.
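The abstract's claim about the simplicity and parallelism of HDC operations can be illustrated with a minimal sketch of generic hyperdimensional classification. This is not the GraspHD algorithm itself; it is an assumed, textbook-style example using bipolar hypervectors, element-wise binding, majority bundling, and similarity-based matching:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def random_hv():
    # Random bipolar hypervector (+1/-1 entries)
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    # Binding: element-wise multiplication (XOR analogue for bipolar codes)
    return a * b

def bundle(hvs):
    # Bundling: element-wise majority vote via the sign of the sum
    s = np.sum(hvs, axis=0)
    return np.where(s >= 0, 1, -1)

def similarity(a, b):
    # Normalized dot product; ~0 for unrelated hypervectors, ~1 for identical
    return np.dot(a, b) / D

def noisy_copy(hv, flip_prob=0.1):
    # Simulate sensor noise by flipping a fraction of the components
    return np.where(rng.random(D) < flip_prob, -hv, hv)

# Build a class prototype by bundling several noisy training samples
base = random_hv()
prototype = bundle([noisy_copy(base) for _ in range(5)])

# A fresh noisy query is still far closer to its prototype than to chance
query = noisy_copy(base)
print(similarity(query, prototype) > similarity(query, random_hv()))  # prints True
```

Every operation here is an element-wise pass over independent components, which is why HDC maps so naturally onto parallel hardware such as FPGAs.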
