Accurate localization is a key component of autonomous vehicle navigation. Moving objects in the scene can cause feature-matching errors against the map features, seriously degrading localization accuracy. The neuromorphic vision sensor (NeuroIV) is a kind of dynamic vision sensor with high temporal resolution, motion-capture capability, and lightweight computation. In view of this, this work proposes combining NeuroIV data with LIDAR points to extract static landmark features and achieve robust navigation localization. However, as a younger and smaller research field than RGB computer vision, neuromorphic vision has rarely been applied to intelligent vehicles. To this end, we build a novel dataset recorded with a NeuroIV sensor and design a YOLO-small network to detect moving objects on it. To remove the dynamic zones completely, a novel sensor-fusion model is built through zone segmentation and matching, so that the static environment in the LIDAR data is fully recovered from the remaining points. By evaluating different types of LIDAR points, the feature-matching error is further alleviated, making localization more accurate. Qualitative and quantitative results show a 14.13% mAP improvement in moving-object detection on the new NeuroIV dataset, and a clear improvement in localization accuracy from the LIDAR-point evaluation.