Abstract

Currently, only a few lane detection methods address the dynamic characteristics of video. In continuous prediction, single-frame detection results exhibit varying degrees of jitter, resulting in poor robustness. We propose a new fast video instance lane detection network, called MT-Net, based on space–time memory and template matching. Memory templates are used to establish feature associations between past and current frames from a local–global perspective, mitigating the jitter caused by scene changes and other disturbances. We also investigated the sources and propagation mechanism of memory errors, and designed new query-frame and memory encoders to obtain higher-precision memory and query-frame features. Experimental results show that, compared with state-of-the-art models, the proposed model reduces the number of parameters by 62.28%, reduces unnecessary jitter and instability in multi-frame lane prediction results by 12.70%, and increases multi-frame lane detection speed by a factor of 1.79. Our proposed method has clear advantages in maintaining multi-frame instance lane stability and reducing errors.
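The space–time memory mechanism named above generally follows the key–value attention pattern of memory networks: features of past frames are stored as keys and values, and the current (query) frame retrieves matching memory content by affinity. Below is a minimal sketch of such a memory read, assuming flattened feature maps; all names, shapes, and the fusion-by-concatenation step are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def memory_read(mem_keys, mem_vals, query_key, query_val):
    """Sketch of a space-time memory read (hypothetical, not MT-Net's code).

    mem_keys:  (T*H*W, Ck) key features from T past (memory) frames
    mem_vals:  (T*H*W, Cv) value features from T past (memory) frames
    query_key: (H*W, Ck)   key features from the current frame
    query_val: (H*W, Cv)   value features from the current frame
    """
    # Affinity between every query location and every memory location,
    # scaled by sqrt(key dimension) as in dot-product attention.
    affinity = query_key @ mem_keys.t() / mem_keys.shape[1] ** 0.5  # (H*W, T*H*W)
    weights = F.softmax(affinity, dim=1)

    # Retrieve memory values weighted by their affinity to the query.
    retrieved = weights @ mem_vals  # (H*W, Cv)

    # Fuse retrieved memory with the current frame's own value features.
    return torch.cat([retrieved, query_val], dim=1)  # (H*W, 2*Cv)
```

Because each query location attends to every location of every stored frame, such a read links the current frame to the memory both locally and globally, which is the property the abstract relies on to suppress inter-frame jitter.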
