Abstract

Vision-based target tracking is crucial for bio-inspired snake robots exploring unknown environments. However, the traditional vision modules of snake robots struggle to overcome the image blur caused by the robot's periodic swinging motion. A promising alternative is the neuromorphic vision sensor (NVS), which mimics the biological retina to detect a target at a higher temporal frequency and over a wider dynamic range. In this study, an NVS and a spiking neural network (SNN) were deployed on a snake robot for the first time to achieve pipe-like object tracking. An SNN based on the Hough transform was designed to detect a target from the asynchronous event stream produced by the NVS. Combined with the state of snake motion estimated from the joint position sensors, a tracking framework was proposed. Experimental results from the simulator demonstrated the validity of the framework and the autonomous locomotion ability of the snake robot. Comparing the SNN model's performance on CPU and GPU, the model ran fastest on a GPU under a simplified, synchronous update rule, while it achieved higher precision on a CPU with asynchronous updates.

Highlights

  • Target tracking performed on mobile robots, such as bio-inspired snake robots, remains a challenging research topic

  • We presented a pipe-like object detection and autonomous tracking framework, implemented on our wheel-less snake robot with a monocular Dynamic Vision Sensor (DVS) camera, using a spiking neural network inspired by the Hough transform (Wiesmann et al., 2012; Seifozzakerini et al., 2016)

  • The event sequences generated by the DVS were fed into the vision spiking neural network (SNN)


Introduction

Target tracking performed on mobile robots, such as bio-inspired snake robots, remains a challenging research topic. By generating blur templates of the target from blur-free frames, the target can be represented by a sparse matrix and tracked with a particle filter (Wu et al., 2011; Ma et al., 2016). Although these frameworks are blur-tolerant, they remain time-consuming. Hu et al. (2009) designed a vision-based autonomous robotic fish and implemented red-ball tracking, but this method cannot be used in a complex environment or for objects with low color contrast. Researchers have also explored new types of vision sensors for target tracking, such as structured light sensors (Ponte et al., 2014) and neuromorphic vision sensors (NVS) (Schraml et al., 2010; Glover and Bartolozzi, 2016; Liu et al., 2016; Moeys et al., 2016; Seifozzakerini et al., 2016).
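To make the event-driven detection idea concrete, the sketch below shows one common way to combine a DVS event stream with a Hough-transform-style spiking network, in the spirit of Seifozzakerini et al. (2016): each event votes along its Hough curve into an accumulator whose cells behave like leaky integrate-and-fire neurons, and a cell "spikes" (reports a line) when its membrane potential crosses a threshold. This is a minimal illustrative sketch, not the authors' implementation; the class name, resolution, threshold, and leak constant are all assumed values.

```python
import numpy as np

class EventHough:
    """Sketch of an event-driven Hough accumulator with leaky
    integrate-and-fire cells (hypothetical parameters throughout)."""

    def __init__(self, width=128, height=128, n_theta=180,
                 threshold=30.0, tau=0.05):
        # Accumulator over (rho, theta); rho is offset so indices stay non-negative.
        self.rho_max = int(np.hypot(width, height))
        self.thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        self.cos_t = np.cos(self.thetas)
        self.sin_t = np.sin(self.thetas)
        self.threshold = threshold   # membrane threshold (assumed value)
        self.tau = tau               # leak time constant in seconds (assumed)
        self.potential = np.zeros((2 * self.rho_max, n_theta))
        self.last_t = 0.0

    def process_event(self, x, y, t):
        """Integrate one DVS event (x, y, t); return (rho, theta) cells that spiked."""
        # Exponential leak since the previous event (asynchronous update).
        self.potential *= np.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        # Vote one unit of charge into every cell on the event's Hough curve
        # rho = x*cos(theta) + y*sin(theta).
        rho = np.round(x * self.cos_t + y * self.sin_t).astype(int) + self.rho_max
        self.potential[rho, np.arange(len(self.thetas))] += 1.0
        # Cells crossing threshold emit a spike and are reset.
        spiked = np.argwhere(self.potential > self.threshold)
        self.potential[self.potential > self.threshold] = 0.0
        return [(r - self.rho_max, self.thetas[k]) for r, k in spiked]
```

Because the update happens per event rather than per frame, the detector has no fixed frame rate: a fast-moving edge simply produces more events and is detected sooner, which is what makes this approach attractive under the motion blur induced by the snake robot's swinging gait.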

