In this paper, we present EC-WAMI, the first successful application of neuromorphic event cameras (ECs) to Wide-Area Motion Imagery (WAMI) and Remote Sensing (RS), showcasing their potential for advancing Structure-from-Motion (SfM) and 3D reconstruction across diverse imaging scenarios. ECs, which detect asynchronous pixel-level brightness changes, offer key advantages over traditional frame-based sensors, including high temporal resolution, low power consumption, and resilience to dynamic lighting. These capabilities allow ECs to overcome challenges such as glare, uneven illumination, and low-light conditions that are common in aerial imaging and remote sensing, while also extending UAV flight endurance. To evaluate the effectiveness of ECs in WAMI, we simulate event data from RGB WAMI imagery and integrate it into SfM pipelines for camera pose optimization and 3D point cloud generation. Using two state-of-the-art SfM methods, namely COLMAP and Bundle Adjustment for Sequential Imagery (BA4S), we show that although ECs do not capture scene content like traditional cameras, their spike-based events, which measure only illumination changes, allow accurate camera pose recovery in WAMI scenarios even in low-framerate (5 fps) simulations. Our results indicate that while BA4S and COLMAP provide comparable accuracy, BA4S significantly outperforms COLMAP in terms of speed. Moreover, we evaluate different feature extraction methods, showing that the deep learning-based LightGlue matcher consistently outperforms traditional handcrafted descriptors, providing improved reliability and accuracy for event-based SfM. These results highlight the broader potential of ECs in remote sensing, aerial imaging, and 3D reconstruction beyond conventional WAMI applications. Our dataset will be made available for public use.
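The event simulation mentioned above typically follows the standard contrast-threshold model used by event-camera simulators: a pixel emits an event when its log-intensity change between frames exceeds a fixed threshold, with polarity given by the sign of the change. The following is a minimal sketch of that model under these assumptions; the function name, threshold value, and frame format are illustrative and not the paper's actual implementation.

```python
import numpy as np

def simulate_events(frame_prev, frame_next, threshold=0.2, eps=1e-6):
    """Generate per-pixel events from two consecutive grayscale frames
    (float arrays in [0, 1]).

    An event fires wherever |log I_next - log I_prev| >= threshold;
    polarity is the sign of the change. This is a simplified two-frame
    version of the contrast-threshold model; real simulators also
    interpolate intensities between frames to place events in time.
    Returns (rows, cols, polarities).
    """
    log_prev = np.log(frame_prev.astype(np.float64) + eps)
    log_next = np.log(frame_next.astype(np.float64) + eps)
    diff = log_next - log_prev
    rows, cols = np.nonzero(np.abs(diff) >= threshold)
    polarities = np.sign(diff[rows, cols]).astype(np.int8)
    return rows, cols, polarities
```

For example, a pixel brightening from 0.5 to 1.0 yields a log change of about 0.69, well above a 0.2 threshold, and so produces a single positive-polarity event at that location.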