Abstract

Robotic vision plays a major role in applications ranging from factory automation to service robots. However, traditional frame-based cameras limit continuous visual feedback due to their low sampling rate and the redundant data they produce in real-time image processing, especially for high-speed tasks. Event cameras offer human-like vision capabilities, such as observing dynamic changes asynchronously at a high temporal resolution ($1\,\mu s$) with low latency and a wide dynamic range. In this paper, we present a visual servoing method that uses an event camera and a switching control strategy to explore, reach, and grasp in a manipulation task. We devise three surface layers of active events to directly process the stream of events generated by relative motion. A purely event-based approach is adopted to extract corner features, localize them robustly using heat maps, and generate virtual features for tracking and alignment. Based on this visual feedback, the motion of the robot is controlled so that upcoming event features converge to the desired events in spatio-temporal space. The controller switches its strategy according to the sequence of operations to establish a stable grasp. The event-based visual servoing (EBVS) method is validated experimentally using a commercial robot manipulator in an eye-in-hand configuration. Experiments demonstrate the effectiveness of the EBVS method in tracking and grasping objects of different shapes without the need for re-tuning.
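The central data structure behind such event processing is a surface of active events (SAE): a per-pixel map of the most recent event timestamps. Below is a minimal single-layer SAE update sketch in Python; the sensor resolution and the (x, y, t, polarity) event layout are illustrative assumptions, and the paper itself maintains three such surface layers rather than one.

```python
import numpy as np

# Minimal sketch of a surface of active events (SAE): a per-pixel map that
# stores the timestamp of the most recent event at each pixel. The paper
# maintains three such surface layers; a single layer is shown here.
# The resolution and event tuple layout are assumptions, not taken from
# the paper.

WIDTH, HEIGHT = 346, 260  # assumed event-camera resolution

sae = np.zeros((HEIGHT, WIDTH), dtype=np.float64)  # last event time per pixel

def update_sae(sae, event):
    """Record the event's timestamp at its pixel location."""
    x, y, t, _polarity = event
    sae[y, x] = t

# Feed a short synthetic stream of (x, y, t_in_microseconds, polarity) events.
for ev in [(10, 20, 1_000_000.0, 1), (11, 20, 1_000_001.0, -1)]:
    update_sae(sae, ev)
```

Because each incoming event touches only one pixel, the surface can be updated asynchronously at the camera's native temporal resolution, which is what enables downstream corner detection to run on the raw event stream.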

Highlights

  • In robotics, visual servoing is a well-studied research topic [1]–[3] and a well-known real-time technique for controlling the motion of a robot using continuous visual feedback (a minimal sketch of the classical control law follows this list)

  • Visual servoing has been deployed in robotic manipulators [11], unmanned ground vehicles (UGVs) [12], unmanned aerial vehicles (UAVs) [13], [14], unmanned underwater vehicles (UUVs) [15], space robots [16], human-robot interaction (HRI) [17], and multi-robot systems (MRS) [18]

  • Similar to image-based visual servoing (IBVS) approaches, but in the line of event-based vision research, we present an event-based visual servoing (EBVS) method that adopts an eye-in-hand configuration and processes the event stream to control the motion of a robot manipulator
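For context, the sketch below shows the classical IBVS control law $v = -\lambda L^{+} e$ with the standard point-feature interaction matrix (Chaumette and Hutchinson), on which event-based variants such as EBVS build. This is the textbook formulation, not the paper's exact event-feature controller; the gain and depth values are illustrative assumptions.

```python
import numpy as np

# Sketch of the classical IBVS law v = -lambda * pinv(L) @ e for point
# features. The interaction matrix below is the standard one for a
# normalized image point (x, y) at depth Z; the paper's event-feature
# controller and gains are not reproduced here.

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist [vx, vy, vz, wx, wy, wz] driving features to desired."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: four tracked corners converging to a centered square
# (feature coordinates, goal, and depths are made-up values).
feats = [(0.10, 0.12), (-0.08, 0.11), (-0.09, -0.10), (0.11, -0.09)]
goal  = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
v = ibvs_velocity(feats, goal, depths=[0.4] * 4)
```

With more than three point features the stacked system is over-determined, which is why the pseudo-inverse is used to compute the commanded camera twist.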


Summary

INTRODUCTION

Visual servoing is a well-studied research topic [1]–[3] and a well-known real-time technique to control the motion of a robot using continuous visual feedback. Similar to IBVS approaches, but in the line of event-based vision research, we present an event-based visual servoing (EBVS) method that adopts an eye-in-hand configuration and processes the event stream to control the motion of the robot manipulator. The primary contributions of this paper are summarized as follows: 1) For the first time, we present a purely event-based visual servoing method using a neuromorphic camera in an eye-in-hand configuration for the grasping pipeline of a robotic manipulator. Instead of a frame-based camera, an event camera is mounted on the robot's end-effector, maintaining a fixed relative position with the vacuum gripper. Such a setting offers flexibility in viewing the workspace and high precision for the grasping objective. The step-by-step processing of events, the control law, and the switching strategy are detailed in the following sections.
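As a rough illustration of such a switching strategy, the sketch below encodes an explore–reach–align–grasp sequence as a small state machine. The stage names, transition conditions, and thresholds are assumptions for illustration, not the paper's exact design.

```python
from enum import Enum, auto

# Hedged sketch of a switching control strategy for a grasping pipeline.
# Stage names, transition conditions, and tolerances are illustrative
# assumptions; the paper's exact sequencing is not reproduced here.

class Stage(Enum):
    EXPLORE = auto()  # move until the object's event features are detected
    REACH = auto()    # servo features toward the desired configuration
    ALIGN = auto()    # fine-align the gripper using virtual features
    GRASP = auto()    # engage the vacuum gripper

def next_stage(stage, features_found, error_norm,
               reach_tol=0.05, align_tol=0.01):
    """Advance the pipeline when the current stage's condition is met."""
    if stage is Stage.EXPLORE and features_found:
        return Stage.REACH
    if stage is Stage.REACH and error_norm < reach_tol:
        return Stage.ALIGN
    if stage is Stage.ALIGN and error_norm < align_tol:
        return Stage.GRASP
    return stage
```

Each visual-servoing iteration would call `next_stage` with the current feature-tracking status and error norm, so the controller changes its objective only when the preceding stage has converged.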

EVENT PROCESSING
EVENT-BASED FEATURE DETECTION
EVENT-BASED FEATURE TRACKING
GRIPPER ALIGNMENT TO GRASP
INITIALIZE SWITCHING STRATEGY
EXPERIMENTAL SETUP AND PROTOCOL
CONCLUSION AND FUTURE WORK
