Abstract

This paper explores the representation of vehicle lights in computer vision and its implications for various pattern recognition tasks in autonomous driving. Different representations for vehicle lights, including bounding boxes, center points, corner points, and segmentation masks, are discussed in terms of their strengths and weaknesses for a variety of domain tasks, as well as their associated data collection and annotation challenges. This motivates the introduction of the LISA Vehicle Lights Dataset, which provides light annotations for position, state, color, and signal, specifically designed for downstream applications in vehicle detection, intent and trajectory prediction, and safe path planning. A comparison of existing vehicle light datasets is provided, highlighting the unique features and limitations of each. Because occlusions from vehicle pose and passing objects can limit camera observation, we introduce a group of Light Visibility neural networks, which take a detected vehicle image as input and output whether the corresponding vehicle light is present in the image. This is especially important for the evaluation of light localizations, states, and signals, since system decisions should distinguish between a light whose state is unknown due to occlusion and one that is uncertain due to model limitations. We show that our trained Light Visibility models achieve over 90% accuracy on each of the four light classes. Our dataset and models are made available at https://cvrr.ucsd.edu/vehicle-lights-dataset.
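To make the role of the Light Visibility networks concrete, the following is a minimal sketch, not the authors' architecture: one small binary classifier per light class that takes a cropped vehicle detection and predicts whether that light is visible in the crop. The light class names, input resolution, and network layers are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LightVisibilityNet(nn.Module):
    """Binary classifier: is a given vehicle light visible in this vehicle crop?"""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                     # small convolutional feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 1)                      # single visibility logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x).flatten(1))      # shape (N, 1) logits

# One model per light class, mirroring the "group" of visibility networks.
LIGHT_CLASSES = ["front_left", "front_right", "rear_left", "rear_right"]  # assumed class names
models = {name: LightVisibilityNet() for name in LIGHT_CLASSES}

# Usage: a batch of cropped, resized vehicle detections -> per-light visibility decisions.
crops = torch.randn(8, 3, 128, 128)                        # 8 vehicle crops (assumed 128x128 input)
visibility = {name: torch.sigmoid(m(crops)) > 0.5 for name, m in models.items()}
```

A downstream light-state or signal classifier could then treat a light flagged as not visible as "unknown due to occlusion" rather than folding that case into its own prediction uncertainty, which is the distinction the abstract highlights.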
