Abstract
Inspired by biological vision mechanisms, event-based cameras capture continuous object motion by detecting brightness changes independently and asynchronously, overcoming the limitations of traditional frame-based cameras. Complementarily, spiking neural networks (SNNs) offer asynchronous computation and exploit the inherent sparseness of spatio-temporal events. Event-based pixel-wise optical flow estimation computes the positions and relationships of objects across adjacent frames; however, because event-camera outputs are sparse and uneven, dense scene information is difficult to generate, and the local receptive fields of conventional neural networks lead to poor tracking of moving objects. To address these issues, an improved event-based self-attention optical flow estimation network (SA-FlowNet) is proposed, which independently uses criss-cross and temporal self-attention mechanisms to directly capture long-range dependencies and efficiently extract temporal and spatial features from event streams. In the former mechanism, a cross-domain attention scheme that dynamically fuses temporal-spatial features is introduced. The proposed network adopts a spiking-analogue neural network architecture trained with an end-to-end learning method and gains significant computational energy benefits, especially for SNNs. State-of-the-art error rates for optical flow prediction on the Multi-Vehicle Stereo Event Camera (MVSEC) dataset are demonstrated in comparison with current SNN-based approaches.