Event cameras are bio‐inspired sensors that offer advantages over traditional cameras. They operate asynchronously, sampling the scene at microsecond resolution and producing a stream of brightness changes. This unconventional output has sparked novel computer vision methods to unlock the camera's potential. Here, the problem of event‐based stereo 3D reconstruction for SLAM is considered. Most event‐based stereo methods attempt to exploit the camera's high temporal resolution and the simultaneity of events across cameras to establish matches and estimate depth. By contrast, this work investigates how to estimate depth without explicit data association, by fusing disparity space images (DSIs) that originate from efficient monocular methods. Fusion theory is developed and applied to design multi‐camera 3D reconstruction algorithms that produce state‐of‐the‐art results, as confirmed by comparisons with four baseline methods and tests on a variety of available datasets.
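The core idea, estimating depth by fusing per-camera DSIs rather than matching individual events, can be sketched as follows. This is an illustrative assumption, not the paper's implementation: each camera is assumed to contribute a 3D ray-density volume (DSI), the volumes are fused voxel-wise (a harmonic mean is used here as one plausible fusion function that rewards agreement between cameras), and a depth map is read off the fused volume by a per-pixel maximum. All names and parameters are hypothetical.

```python
import numpy as np

def fuse_dsis_harmonic(dsi_a: np.ndarray, dsi_b: np.ndarray,
                       eps: float = 1e-6) -> np.ndarray:
    """Fuse two ray-density volumes voxel-wise. The harmonic mean stays
    small unless BOTH cameras score a voxel highly, so it acts as an
    implicit cross-camera consistency check (illustrative choice)."""
    return 2.0 * dsi_a * dsi_b / (dsi_a + dsi_b + eps)

def depth_from_dsi(dsi: np.ndarray, depths: np.ndarray,
                   min_score: float = 1.0) -> np.ndarray:
    """Per-pixel depth = depth plane with the highest fused score.
    Pixels whose best score falls below min_score are marked invalid."""
    best_plane = dsi.argmax(axis=0)            # index along the depth axis
    depth_map = depths[best_plane].astype(float)
    depth_map[dsi.max(axis=0) < min_score] = np.nan
    return depth_map

# Toy example: 8 depth planes over a 4x4 image, two synthetic DSIs.
rng = np.random.default_rng(0)
dsi_left = rng.random((8, 4, 4))               # background noise votes
dsi_right = rng.random((8, 4, 4))
dsi_left[3] += 5.0                             # both cameras vote for plane 3 ...
dsi_right[3] += 5.0
dsi_right[6] += 9.0                            # ... only one camera votes for plane 6

fused = fuse_dsis_harmonic(dsi_left, dsi_right)
depths = np.linspace(0.5, 4.0, 8)              # metric depth of each plane
depth_map = depth_from_dsi(fused, depths)
print(depth_map)
```

In the toy run, the harmonic mean suppresses plane 6 (supported by only one camera) while plane 3 (supported by both) dominates, so every pixel resolves to the depth of plane 3 without any event-to-event matching.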