Abstract

This paper presents a neuromorphic dual-line vision sensor and signal-processing concepts for object recognition and classification. The system performs ultrahigh-speed machine vision with a compact, low-cost embedded-processing architecture. The main innovations of this paper are efficient edge extraction of moving objects by the vision sensor at the pixel level and a novel concept for real-time embedded vision processing based on address-event data. The proposed system exploits the very high temporal resolution and the sparse visual-information representation of the event-based vision sensor. The 2 × 256 pixel dual-line temporal-contrast vision sensor asynchronously responds to relative changes in illumination intensity and consequently extracts the contours of moving objects. This paper shows that the data volume is independent of object velocity and evaluates the data quality for object velocities of up to 40 m/s (equivalent to up to 6.25 m/s on the sensor's focal plane). Subsequently, an embedded-processing concept is presented for real-time extraction of object contours and for object recognition. Finally, the influence of object velocity on high-performance embedded computer vision is discussed.
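The temporal-contrast principle described in the abstract can be illustrated with a minimal sketch: each pixel tracks the log intensity at its last event and asynchronously emits an ON or OFF address-event whenever the relative intensity change exceeds a contrast threshold. The function below is a simplified software model under assumed conventions (events as `(t, pixel, polarity)` tuples and a single shared threshold), not the paper's actual pixel circuit or data format.

```python
import math

def temporal_contrast_events(samples, threshold=0.15):
    """Simplified model of a temporal-contrast pixel array.

    `samples` is a list of (t, intensities) tuples, one intensity value per
    pixel. An address-event (t, pixel, polarity) is emitted whenever a
    pixel's log intensity has changed by more than `threshold` since that
    pixel's last event, mimicking asynchronous per-pixel operation.
    """
    events = []
    ref = None  # per-pixel log intensity at the time of the last event
    for t, intensities in samples:
        logs = [math.log(i) for i in intensities]
        if ref is None:
            ref = logs[:]  # initialise reference levels; no events yet
            continue
        for px, level in enumerate(logs):
            # Emit repeatedly until the residual change drops below the
            # threshold, so a large step produces several events.
            while level - ref[px] >= threshold:
                events.append((t, px, +1))   # ON event: intensity increased
                ref[px] += threshold
            while ref[px] - level >= threshold:
                events.append((t, px, -1))   # OFF event: intensity decreased
                ref[px] -= threshold
    return events
```

Because events are generated only by relative intensity changes, a static scene produces no data at all, while a moving edge produces a sparse burst of events at the contour, which is the property the abstract exploits for velocity-independent data volume.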
