Abstract

Precise perception of the surrounding environment in traffic scenes is an essential component of an intelligent transportation system. Event cameras can provide information complementary to traditional frame-based cameras, such as high dynamic range and high temporal resolution, for the perception of traffic targets. To improve the precision and reliability of perception, and to allow the many existing RGB camera-based methods to be applied to event cameras directly, a refined registration method for event-based and RGB cameras based on pixel-level region segmentation is proposed, enabling fusion at the pixel level. The experimental dataset comprises eight sequences and 260 typical traffic scenes, both selected from DSEC, a traffic-oriented event-based dataset. The registered event images show better visual spatial consistency with the RGB images. Compared with the baseline, the evaluation indicators, including contrast, the proportion of overlapping pixels, and average registration accuracy, are all improved. In the traffic object segmentation task, the average boundary displacement error of the proposed method decreases relative to the boundary displacement error between the ground truth and the baseline, with a maximum reduction of 79.665%. These results indicate prospective applications of combined event and RGB cameras in the perception of intelligent transportation systems. The traffic dataset with pixel-level semantic annotations will be released soon.
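As a rough illustration of what pixel-level registration between an event camera and an RGB camera entails (a minimal sketch only, not the paper's segmentation-refined pipeline, which the abstract above only summarizes), the snippet below warps an accumulated event-count frame into the RGB camera's pixel grid using a planar homography. The function name, the matrix H, and all image sizes are hypothetical placeholders.

    import cv2
    import numpy as np

    def warp_events_to_rgb(event_frame: np.ndarray,
                           H: np.ndarray,
                           rgb_shape: tuple) -> np.ndarray:
        """Warp an accumulated event frame into RGB pixel coordinates.

        event_frame: 2-D event-count image from the event camera.
        H:           3x3 homography mapping event pixels to RGB pixels
                     (assumed to come from some prior calibration step).
        rgb_shape:   (height, width) of the target RGB image.
        """
        h, w = rgb_shape
        # Nearest-neighbor interpolation keeps event counts discrete.
        return cv2.warpPerspective(event_frame, H, (w, h),
                                   flags=cv2.INTER_NEAREST)

    # Example usage with synthetic data.
    events = np.zeros((480, 640), dtype=np.uint8)
    events[100:120, 200:220] = 255            # a fake event blob
    H = np.array([[1.0, 0.0, 5.0],            # hypothetical homography
                  [0.0, 1.0, -3.0],
                  [0.0, 0.0, 1.0]])
    registered = warp_events_to_rgb(events, H, (1080, 1440))

A single homography is only exact for planar or distant scenes; a pixel-accurate method such as the one proposed here would refine this kind of global warp with region-level correspondences.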