Abstract

Currently, only a few lane detection methods address the dynamic characteristics of video. In continuous prediction, single-frame detection results exhibit varying degrees of jitter, resulting in poor robustness. We propose a new fast video instance lane detection network, called MT-Net, based on space–time memory and template matching. Memory templates are used to establish feature associations between past and current frames from a local–global perspective, mitigating the jitter caused by scene changes and other disturbances. We also investigate the sources and propagation mechanism of memory errors, and we design new query-frame and memory encoders to obtain higher-precision memory and query-frame features. Experimental results show that, compared with state-of-the-art models, the proposed model reduces the number of parameters by 62.28%, reduces unnecessary jitter and instability in multi-frame lane prediction results by 12.70%, and increases multi-frame lane detection speed by a factor of 1.79. Our proposed method has clear advantages in maintaining multi-frame instance lane stability and reducing errors.
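The abstract does not spell out MT-Net's internals, but the mechanism it names, a space–time memory read that matches the current (query) frame against features stored from past frames, is a well-established pattern in video models. The sketch below shows one common dot-product-attention formulation of such a memory read; the function name, tensor shapes, and scaling are illustrative assumptions, not MT-Net's actual implementation.

import torch
import torch.nn.functional as F

def memory_read(mem_keys, mem_vals, query_key):
    """Illustrative space-time memory read (STM-style); not MT-Net's
    exact formulation.

    mem_keys:  (T, Ck, H, W)  keys encoded from T past frames
    mem_vals:  (T, Cv, H, W)  values encoded from T past frames
    query_key: (Ck, H, W)     key encoded from the current frame
    returns:   (Cv, H, W)     memory readout aligned to the query frame
    """
    T, Ck, H, W = mem_keys.shape
    Cv = mem_vals.shape[1]

    # Flatten space and time so every memory location can be matched
    # against every query location.
    k = mem_keys.permute(1, 0, 2, 3).reshape(Ck, T * H * W)   # (Ck, THW)
    v = mem_vals.permute(1, 0, 2, 3).reshape(Cv, T * H * W)   # (Cv, THW)
    q = query_key.reshape(Ck, H * W)                          # (Ck, HW)

    # Scaled dot-product affinity between memory and query locations.
    affinity = torch.einsum("ct,cp->tp", k, q) / (Ck ** 0.5)  # (THW, HW)
    weights = F.softmax(affinity, dim=0)  # normalize over memory locations

    # Weighted sum of memory values gives the per-pixel readout.
    read = v @ weights                                        # (Cv, HW)
    return read.reshape(Cv, H, W)

In a full detector, this readout would typically be fused with the query frame's own features before decoding per-instance lane predictions; how MT-Net combines the memory read with template matching is specified in the paper itself, not reproduced here.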
