Abstract

Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event-based sensors are low-power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly less computation. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches to object tracking using neuromorphic sensors perform well while the sensor is static or when a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates the computation of scene statistics and the characterization of objects in the scene. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with a maximum accuracy of 92%.
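The abstract describes spike-groups only at a high level, so the following is a minimal sketch, under our own assumptions, of how asynchronous events might be partitioned into spatio-temporal spike-groups. The (x, y, t, p) tuple layout, the function partition_into_spike_groups, and the bin sizes are illustrative placeholders rather than the authors' implementation.

```python
import numpy as np

# Hypothetical event records: each row is (x, y, t, p) with pixel coordinates
# x, y, timestamp t in microseconds, and polarity p in {-1, +1}.
events = np.array([
    [12, 40, 1_000, +1],
    [13, 40, 1_250, -1],
    [80,  5, 1_300, +1],
    [12, 41, 2_100, +1],
], dtype=np.int64)

def partition_into_spike_groups(events, spatial_bin=8, temporal_bin_us=1_000):
    """Group events into spatio-temporal cells ("spike-groups").

    Each event is assigned a key (x // spatial_bin, y // spatial_bin,
    t // temporal_bin_us); events sharing a key form one spike-group.
    Bin sizes are illustrative assumptions, not values from the paper.
    """
    keys = np.stack([
        events[:, 0] // spatial_bin,
        events[:, 1] // spatial_bin,
        events[:, 2] // temporal_bin_us,
    ], axis=1)
    groups = {}
    for key, event in zip(map(tuple, keys), events):
        groups.setdefault(key, []).append(event)
    return {k: np.array(v) for k, v in groups.items()}

groups = partition_into_spike_groups(events)
for key, members in groups.items():
    print(key, "->", len(members), "events")
```

Once events are bucketed this way, per-group statistics can be computed over a fixed time window rather than over individual asynchronous spikes.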

Highlights

  • Motion segmentation is an important task in robotics for applications that involve a moving camera or neuromorphic sensor

  • We address the problem of classifying spikes, or motion events, into foreground and background events by taking advantage of induced sensor micromotion

  • To compute reliable temporal statistics that are a consequence of micromotions, we introduce the concept of spike-groups (a minimal illustrative sketch follows this list)
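The highlights do not state which temporal statistic drives the foreground/background decision, so the sketch below is a purely illustrative assumption: micromotion is taken to make static structure fire at a roughly uniform rate across spike-groups, and groups whose event counts deviate strongly from that baseline are labelled as dynamic foreground. The function classify_spike_groups, the median/MAD rule, and the factor k are our own placeholders, not the authors' criterion.

```python
import numpy as np

def classify_spike_groups(groups, k=3.0):
    """Label each spike-group as micromotion-induced background or foreground.

    Assumption for illustration: groups whose event count deviates from the
    median count by more than k robust standard deviations are treated as
    dynamic foreground. The rule and threshold are hypothetical.
    """
    counts = np.array([len(members) for members in groups.values()], dtype=float)
    median = np.median(counts)
    mad = np.median(np.abs(counts - median)) + 1e-9  # robust spread estimate
    labels = {}
    for key, members in groups.items():
        deviation = abs(len(members) - median) / mad
        labels[key] = "foreground" if deviation > k else "background"
    return labels

# Usage with the spike-groups built in the previous sketch:
# labels = classify_spike_groups(groups)
```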



Introduction

Motion segmentation is an important task in robotics for applications that involve a moving camera or neuromorphic sensor. When standard frame-rate cameras are employed, the difference between consecutive image frames is the simplest method to detect static or dynamic events (Sobral and Vacavant, 2014). Another technique is to calculate optical flow vectors from consecutive frames to estimate regions of coherent motion (Narayana et al., 2013). Weinland et al. (2011) provide a comprehensive survey of strategies used for motion segmentation of dynamic objects with traditional image sensors. Standard cameras, however, have intrinsic limitations associated with uneven illumination, per-pixel computations on every frame, motion blur of moving objects, and limited frame rate.
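Frame differencing, the baseline mentioned above for standard cameras, can be written in a few lines. The sketch below uses synthetic frames and an illustrative threshold; it is a generic example, not a method taken from the cited surveys.

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=25):
    """Simple motion detection for standard frame-based cameras.

    Pixels whose absolute intensity change between consecutive frames exceeds
    a threshold are flagged as moving. The threshold value is illustrative.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy example: a bright 10x10 block shifts by 5 pixels between two frames.
prev_frame = np.zeros((64, 64), dtype=np.uint8)
curr_frame = np.zeros((64, 64), dtype=np.uint8)
prev_frame[20:30, 20:30] = 200
curr_frame[20:30, 25:35] = 200

mask = frame_difference_mask(prev_frame, curr_frame)
print("moving pixels:", int(mask.sum()))
```

Note that such a mask is only meaningful when the camera itself is static; with a moving camera, nearly every pixel changes between frames, which motivates the event-based approach developed in this paper.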
