Abstract

The long-tail effect, characterized by the frequent occurrence of normal scenarios and the scarce appearance of extreme “long-tail” scenarios, is ubiquitous in real-world vision problems. Although many computer vision methods now achieve satisfactory performance in most normal scenarios, existing vision systems still struggle to perceive long-tail scenarios accurately. This deficiency severely hinders the practical deployment of computer vision systems, since long-tail problems can have fatal consequences; in the vision systems of autonomous vehicles, for example, a missed long-tail scenario can cause a traffic accident. In this paper, we first propose a theoretical framework named Long-tail Regularization (LoTR) for analyzing and tackling long-tail problems in the visual perception of autonomous driving. LoTR regularizes rarely occurring long-tail scenarios so that they are frequently encountered. We then present a Parallel Vision Actualization System (PVAS), which combines closed-loop optimization with virtual-real interaction to search for challenging long-tail scenarios and to produce large-scale long-tail driving scenarios for autonomous vehicles. In addition, we describe how PVAS has been applied in the Intelligent Vehicle Future Challenge of China (IVFC), the longest-running autonomous driving competition in the world. Results over the past decade demonstrate that PVAS effectively guides the collection of long-tail data, reduces real-world data collection costs, and improves the ability of vision systems to adapt to complex environments, thereby alleviating the impact of the long-tail effect.
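
The core intuition behind regularizing rare scenarios so they are "frequently encountered" can be pictured as inverse-frequency resampling over scenario categories. The sketch below is only a minimal illustration of that intuition, not the paper's LoTR formulation; the function name, scenario labels, and toy counts are all hypothetical.

# Minimal sketch of the long-tail regularization intuition: resample driving
# scenarios with inverse-frequency weights so rare ("long-tail") scenarios
# are drawn roughly as often as common ones. This is a hypothetical
# illustration, not the LoTR method from the paper.
import random
from collections import Counter

def inverse_frequency_weights(scenario_labels):
    """Return a per-sample weight proportional to 1 / class frequency."""
    counts = Counter(scenario_labels)
    return [1.0 / counts[label] for label in scenario_labels]

# Toy dataset: "clear" scenes dominate; "fog" and "debris" are long-tail.
labels = ["clear"] * 95 + ["fog"] * 4 + ["debris"] * 1
weights = inverse_frequency_weights(labels)

# Each class now carries equal total weight (95 * 1/95 = 4 * 1/4 = 1 * 1),
# so weighted resampling surfaces long-tail scenarios far more often than
# their raw share of the data.
resampled = random.choices(labels, weights=weights, k=1000)
print(Counter(resampled))  # roughly balanced across the three scenario types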
