Recent advances in event-based cameras have driven significant developments in robotics, particularly in visual simultaneous localization and mapping (VSLAM), the task of estimating camera motion in real time while simultaneously mapping the environment using visual sensors on a mobile platform. Event cameras offer several distinct advantages over frame-based cameras, including high dynamic range, high temporal resolution, low power consumption, and low latency. These attributes make event cameras well suited to challenging scenarios such as high-speed motion and scenes with a wide range of illumination. This review examines event-based VSLAM (EVSLAM) algorithms, which leverage the advantages inherent in event streams for localization and mapping. The exposition begins by explaining the operating principles of event cameras and the diverse event representations used in event data preprocessing. A central contribution of this survey is the systematic categorization of EVSLAM research into three parts: event preprocessing, event tracking, and sensor fusion algorithms. Each category is examined in detail, offering practical insights and guidance for understanding each approach. Moreover, we assess state-of-the-art (SOTA) methods, evaluating them on a common dataset for better comparability. This evaluation highlights current challenges and outlines promising avenues for future research, emphasizing the persisting obstacles and potential advances in this rapidly evolving domain.
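To make the notions of event streams and event representations concrete, the following is a minimal illustrative sketch, not taken from the paper: it assumes each event is a tuple (x, y, t, p) of pixel coordinates, a timestamp, and a polarity (+1 for a brightness increase, -1 for a decrease), and shows one common preprocessing step covered in such surveys, accumulating a short window of events into a dense "event frame" that frame-based algorithms can consume. The function and variable names are hypothetical.

```python
# Illustrative sketch (not the paper's code): accumulate an event window
# into a signed 2D histogram, one common event representation.
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate events into a signed 2D histogram (event frame).

    events: iterable of (x, y, t, p) tuples with polarity p in {-1, +1}.
    Each pixel holds the net polarity count observed in the window.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, t, p in events:
        frame[y, x] += p
    return frame

# Hypothetical usage with a few synthetic events on a 4x4 sensor:
events = [(0, 0, 100, +1), (1, 0, 150, +1), (1, 0, 180, -1), (3, 2, 200, -1)]
print(events_to_frame(events, height=4, width=4))
```

Other representations discussed in the EVSLAM literature, such as time surfaces and voxel grids, additionally preserve the timestamps that this simple accumulation discards.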