Abstract
In event-based sensing, many sensors independently and asynchronously emit events when there is a change in their input. Event-based sensing can present significant improvements in power efficiency when compared to traditional sampling, because (1) the output is a stream of events where the important information lies in the timing of the events, and (2) the sensor can easily be controlled to output information only when interesting activity occurs at the input. Moreover, event-based sampling can often provide better resolution than standard uniform sampling. Not only does this occur because individual event-based sensors have higher temporal resolution, it also occurs because the asynchrony of events allows for less redundant and more informative encoding. We would like to explain how such curious results come about. To do so, we use ideal time encoding machines as a proxy for event-based sensors. We explore time encoding of signals with low rank structure, and apply the resulting theory to video. We then see how the asynchronous firing times of the time encoding machines allow for better reconstruction than in the standard sampling case, if we have a high spatial density of time encoding machines that fire less frequently.
Highlights
Many aspects of our lives are governed by routine and rhythm: our work days, breathing patterns, or even music
We study the setup in Fig. 1: multiple time encoding machines (TEMs) are used to encode multiple locations in a scene and each TEM outputs a series of spikes
We find that this setup offers interesting tradeoffs in terms of time and space resolution: for one, increasing spatial resolution can also increase temporal resolution, precisely thanks to a blessing in disguise: the asynchrony of spike times within and across the outputs of the TEMs
Summary
Many aspects of our lives are governed by routine and rhythm: our work days, breathing patterns, or even music. Many engineered systems, such as traditional sampling devices, rely almost exclusively on clocked behavior. These sampling schemes are powerful: they govern how we record music, take images, and transfer information, but they fail to adapt their activity to the varying complexity of the input. Event-based sensing is growing in popularity [2]–[5]. The output of such a sensor is a series of spikes which are characterized by their timing rather than their amplitude, as is the case with traditional sampling [6]. We find a Nyquist-like criterion on the number of spikes needed for reconstruction, requiring as many linearly independent constraints as degrees of freedom. Using this formulation, the video recording problem with TEMs turns into a parametric estimation problem. When using time encoding, spike times are, by design, almost surely different, and this difference comes at no extra cost.
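To make the encoding concrete, below is a minimal sketch of an ideal integrate-and-fire TEM of the kind used as a proxy for an event-based sensor: the device integrates a bias plus its input and emits a spike each time the integral crosses a threshold. All numerical values (threshold, bias, test signals) are illustrative assumptions, not parameters from the paper; the example only shows that two TEMs watching different inputs produce asynchronous spike trains.

```python
import numpy as np

def tem_spike_times(signal, dt, threshold, bias):
    """Ideal integrate-and-fire TEM: integrate (bias + signal) over time
    and emit a spike whenever the running integral reaches the threshold,
    then reset by subtracting the threshold."""
    times = []
    acc = 0.0
    for i, x in enumerate(signal):
        acc += (bias + x) * dt
        if acc >= threshold:
            times.append(i * dt)
            acc -= threshold
    return times

# Two TEMs observing different locations (pixels) of the same scene:
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
s1 = 0.5 * np.sin(2 * np.pi * 3 * t)
s2 = 0.5 * np.cos(2 * np.pi * 5 * t)
spikes1 = tem_spike_times(s1, dt, threshold=0.05, bias=1.0)
spikes2 = tem_spike_times(s2, dt, threshold=0.05, bias=1.0)
# The two spike trains are asynchronous: their spike times differ,
# and each spike time carries information about the local input.
```

Because the bias dominates, each TEM fires at a roughly constant rate, with the input modulating the exact firing times; it is these small, input-dependent shifts, different across TEMs, that supply the extra linearly independent constraints for reconstruction.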