Abstract

Visual 3D perception is a key interface between vehicles and the environment, providing rich perceptual information for autonomous vehicles. In vehicular visual 3D perception, camera position uncertainty often degrades perception and localization performance, and information coupling and propagation among 3D points and cameras are ubiquitous. However, these factors have received little attention in previous algorithms, which mainly focus on efficiency. In this paper, we develop a statistical framework for visual 3D perception in vehicular networks from the perspective of information gains. Specifically, we first derive the influence of camera position uncertainty on perception and localization quality, with geometric interpretations given by the information ellipsoid, and also present the perception loss that uncertainty causes in the camera deployment application. Then we determine the information coupling and propagation among 3D points and cameras in the vehicular network, propose the visual information graph to characterize the coupling, and derive the hierarchical structure of the propagation. Moreover, we propose the equivalent circuit of the quasi-tree network to interpret this hierarchical structure, and derive the blocking and end-link effects for network reduction. Our results provide guidelines for the development of efficient uncertainty- and coupling-aware visual 3D perception techniques in vehicular networks.
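To make the information-ellipsoid notion concrete, the following is a minimal sketch (not the paper's derivation; the matrix values are hypothetical): for an estimate with Fisher information matrix J, the ellipsoid x^T J x = 1 has axes given by J's eigenvectors and semi-axis lengths 1/sqrt(eigenvalue), so directions with less information correspond to longer, more uncertain axes.

```python
import numpy as np

def information_ellipsoid(J):
    """Return (axes, semi_lengths) of the ellipsoid x^T J x = 1.

    J is a symmetric positive-definite Fisher information matrix.
    Each column of `axes` is a principal direction; the matching entry
    of `semi_lengths` is 1/sqrt(eigenvalue), i.e. the uncertainty
    along that direction (eigenvalues are returned in ascending order).
    """
    eigvals, eigvecs = np.linalg.eigh(J)
    semi_lengths = 1.0 / np.sqrt(eigvals)
    return eigvecs, semi_lengths

# Hypothetical example: a 3D point seen by a single camera often has
# strong lateral information but weak depth information.
J = np.diag([100.0, 100.0, 4.0])  # assumed information in x, y, z
axes, lengths = information_ellipsoid(J)
print(lengths)  # the low-information (depth) axis is the longest
```

Under this assumed J, the ellipsoid is five times longer along the depth axis than laterally, which geometrically depicts the familiar depth-uncertainty elongation of monocular triangulation.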
