Abstract

Noise has long been a nonnegligible problem in object detection: it confuses model reasoning and reduces the informativeness of the data. It can also cause inaccurate recognition because the observed patterns shift, which demands robust generalization from the models. To build a general vision model, we need deep learning models that can adaptively select valid information from multimodal data, for two main reasons: multimodal learning can overcome the inherent defects of single-modal data, and adaptive information selection can reduce the chaos in multimodal data. To tackle this problem, we propose a universal uncertainty-aware multimodal fusion model. It adopts a loosely coupled, multipipeline architecture to combine features and results from point clouds and images. To quantify the correlation in multimodal information, we model the uncertainty, as the inverse of data information, in each modality and embed it in bounding-box generation. In this way, our model reduces the randomness in fusion and generates reliable output. Moreover, we conduct a complete investigation on the KITTI 2-D object detection dataset and its derived dirty data. Our fusion model is shown to resist severe noise interference, such as Gaussian noise, motion blur, and frost, with only slight degradation. The experimental results demonstrate the benefits of our adaptive fusion. Our analysis of the robustness of multimodal fusion will provide further insights for future research.
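
The abstract describes weighting each modality by the inverse of its uncertainty when generating bounding boxes. The snippet below is a minimal sketch of that idea, not the authors' implementation: it performs inverse-variance (precision-weighted) late fusion of the box estimates from an image pipeline and a point-cloud pipeline, so the noisier modality contributes less. The function name `fuse_boxes` and the variance inputs are illustrative assumptions.

```python
import numpy as np


def fuse_boxes(box_img, var_img, box_pc, var_pc, eps=1e-6):
    """Inverse-variance fusion of two bounding-box estimates.

    box_*: (4,) arrays [x1, y1, x2, y2] from the image and point-cloud pipelines.
    var_*: (4,) arrays of per-coordinate uncertainty (variance) estimates.
    Returns the fused box and its fused variance.
    """
    w_img = 1.0 / (var_img + eps)   # higher uncertainty -> lower weight
    w_pc = 1.0 / (var_pc + eps)
    fused_box = (w_img * box_img + w_pc * box_pc) / (w_img + w_pc)
    fused_var = 1.0 / (w_img + w_pc)
    return fused_box, fused_var


if __name__ == "__main__":
    # Hypothetical case: the image branch is corrupted (e.g. motion blur),
    # so it reports a high variance; the point-cloud branch is confident.
    box_img = np.array([100.0, 50.0, 180.0, 130.0])
    var_img = np.array([25.0, 25.0, 25.0, 25.0])
    box_pc = np.array([104.0, 52.0, 184.0, 134.0])
    var_pc = np.array([4.0, 4.0, 4.0, 4.0])
    fused, var = fuse_boxes(box_img, var_img, box_pc, var_pc)
    print("fused box:", fused)       # lies closer to the low-uncertainty estimate
    print("fused variance:", var)    # always smaller than either input variance
```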
