Abstract

Dynamic obstacle avoidance is crucial in autonomous driving, ensuring vehicle safety by preventing collisions and improving driving efficiency. Dynamic obstacle avoidance algorithms have made significant progress thanks to deep learning. However, video-based object detection methods can suffer from missed or false detections when processing consecutive frames, especially for high-speed moving targets or in complex dynamic scenes. Multi-target tracking methods require intricate algorithm designs for target initialization and recovery of occluded objects, which can be compromised by tracker performance, leading to unstable tracking or target loss. To address the problem of target loss in multi-target tracking, we design the novel YTCN model, which infuses time-series information through temporal convolution and sharpens the receptive field with Spatial Pyramid Pooling, Feature Concatenate and Spatial Convolution (SPPFCSPC), enhancing the model's feature extraction capability. We also develop the Global Attention Mechanism (GAM) and Double Attention (DA) mechanisms, which merge channel and spatial features to strengthen feature representation. Finally, we design the Temporal Residual Block (TRB) to model obstacles over time. Experimental results show that our method achieves 82.4% mAP@0.5 on the BDD100K dataset, 1.3% higher than previous methods.
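The abstract does not spell out the internals of the Temporal Residual Block, so the following is only a minimal sketch of how a residual block with a 1D convolution over the time axis of per-frame features might be assembled in PyTorch; the class name, layer choices, and tensor shapes are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: TRB internals are not given in the abstract, so the
# layer choices, names, and shapes below are assumptions.
import torch
import torch.nn as nn

class TemporalResidualBlock(nn.Module):
    """Hypothetical temporal residual block: a 1D convolution applied along the
    time axis of per-frame feature vectors, wrapped in a skip connection."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2  # keep the temporal length unchanged
        self.temporal_conv = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.norm = nn.BatchNorm1d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -- per-frame features stacked over time
        residual = x
        out = self.act(self.norm(self.temporal_conv(x)))
        return out + residual  # residual connection preserves per-frame features

if __name__ == "__main__":
    clips = torch.randn(2, 256, 8)   # 2 clips, 256-d features, 8 frames each
    block = TemporalResidualBlock(256)
    print(block(clips).shape)        # torch.Size([2, 256, 8])
```

A design like this lets temporal context refine per-frame features without discarding them, which is consistent with the abstract's goal of reducing target loss across consecutive frames.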
