Abstract

High-speed imaging helps us understand phenomena that are too fast for the human eye to capture. Although ultra-high-speed frame-based cameras (e.g., Phantom) can record millions of frames per second at reduced resolution, they are too expensive to be widely used. Recently, a retina-inspired vision sensor, the spiking camera, has been developed to record external information at 40,000 Hz. The spiking camera represents visual information as asynchronous binary spike streams. However, reconstructing dynamic scenes from asynchronous spikes remains challenging. In this paper, we introduce novel high-speed image reconstruction models based on the short-term plasticity (STP) mechanism of the brain, termed TFSTP and TFMDSTP. We first derive the relationship between the states of the STP model and spike patterns. Then, in TFSTP, by setting up an STP model at each pixel, the scene radiance can be inferred from the states of the models. In TFMDSTP, we first use STP to distinguish moving from stationary regions and then reconstruct each with its own set of STP models. In addition, we present a strategy for correcting erroneous spikes. Experimental results show that the STP-based reconstruction methods effectively reduce noise with less computing time and achieve the best performance on both real-world and simulated datasets.
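The paper derives a closed-form relationship between STP states and the observed spike pattern; the sketch below is only a rough illustration of the per-pixel idea, assuming a standard Tsodyks-Markram-style STP update driven by one pixel's binary spike stream, with a placeholder mapping from the model's depressed steady state to a radiance estimate. The function name, parameter values, and mapping are our assumptions, not the paper's formulation.

```python
import numpy as np

def stp_reconstruct(spikes, dt=1.0 / 40000, tau_d=0.05, tau_f=0.2, U=0.2):
    """Run a Tsodyks-Markram-style STP model on one pixel's binary
    spike stream and map its depressed steady state to an intensity
    estimate. All parameter values here are illustrative defaults,
    not the paper's calibrated constants."""
    R, u = 1.0, U          # R: available resources, u: utilization
    estimate = 0.0
    for s in spikes:
        # Between spikes, R recovers toward 1 and u relaxes toward U.
        R += dt * (1.0 - R) / tau_d
        u += dt * (U - u) / tau_f
        if s:
            u += U * (1.0 - u)   # facilitation on spike arrival
            R -= u * R           # depression: resources are consumed
            # Brighter pixels spike more often, so R settles lower;
            # invert that trend as a placeholder radiance estimate
            # (the paper derives the exact state-to-radiance mapping).
            estimate = 1.0 - R
    return estimate

# Example: a pixel firing every 8 samples (~5 kHz) at 40 kHz sampling.
spikes = np.zeros(4000, dtype=np.uint8)
spikes[::8] = 1
print(stp_reconstruct(spikes))
```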
