Abstract

In a connected vehicle environment based on vehicle-to-vehicle (V2V) technology, images from the front and ego vehicles can be fused to augment a driver’s or autonomous system’s visual field, helping to avoid road accidents by eliminating blind spots (objects occluded by other vehicles), especially in tailgating situations in urban areas. Multi-view image fusion is a hard problem when the relative pose of the two sensors is unknown and the object to be fused is occluded in some views. We therefore propose an image geometric projection model and a new fusion method that operates cooperatively between neighboring vehicles. Based on a 3D inter-vehicle projection model, selected matched feature points are used to estimate the parameters of the geometric transformation. By incorporating depth information, our method also introduces a new deep-affine transformation to fuse inter-vehicle images. Experimental results on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) dataset validate our algorithm: compared with previous work, our method improves the IoU (intersection over union) index by 2~3 times. This algorithm can effectively enhance the visual perception ability of intelligent vehicles and should help advance computer vision technology in the field of cooperative perception.
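As a rough illustration of the pipeline the abstract describes (not the authors' implementation), the sketch below matches local features between the front and ego images, estimates a 2D transform with RANSAC, warps the front image into the ego frame, and blends the two views; a small IoU helper mirrors the evaluation metric. The specific choices here (ORB features, OpenCV's estimateAffine2D, equal blending weights) are assumptions for illustration, and the paper's depth-aware ("deep-affine") transform would additionally use per-point depth, which is omitted:

    # Minimal sketch of feature-based inter-vehicle image fusion.
    # Illustrative only: ORB features, a plain affine model, and
    # equal-weight blending are assumptions; the paper's deep-affine
    # transform also uses depth information, omitted here.
    import cv2
    import numpy as np

    def fuse_inter_vehicle(front_img, ego_img):
        """Warp the front vehicle's image into the ego camera frame and blend."""
        # 1. Detect and match local features between the two views.
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(front_img, None)
        kp2, des2 = orb.detectAndCompute(ego_img, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
        src = np.float32([kp1[m.queryIdx].pt for m in matches])
        dst = np.float32([kp2[m.trainIdx].pt for m in matches])

        # 2. Estimate the geometric transform from the matched feature
        #    points, rejecting outliers with RANSAC.
        A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                    ransacReprojThreshold=3.0)

        # 3. Warp the front image into the ego frame and alpha-blend,
        #    revealing objects occluded in the ego view.
        h, w = ego_img.shape[:2]
        warped = cv2.warpAffine(front_img, A, (w, h))
        return cv2.addWeighted(ego_img, 0.5, warped, 0.5, 0)

    def iou(a, b):
        """IoU of two boxes (x1, y1, x2, y2): |A intersect B| / |A union B|."""
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0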

Highlights

  • According to the Global Status Report on Road Safety 2021, 1.3 million people die each year as a result of road traffic crashes, and an estimated 50 million suffer nonfatal injuries [1]

  • We propose a cooperative visual augmentation algorithm, based on V2V technology, for occluded objects in connected vehicle environments

Introduction

According to the Global Status Report on Road Safety 2021, 1.3 million people die each year as a result of road traffic crashes, and an estimated 50 million people suffer nonfatal injuries [1]. Statistics from the NHTSA show that 30~50% of traffic accidents are rear-end collisions [2,3]. Such a scenario might occur when unforeseen circumstances cause a leading vehicle to brake suddenly [4]. Studies report that an extra 0.5 s of warning time can prevent about 60% of such collisions. The risk can clearly be reduced if the forward vehicle’s images are fused with the host vehicle’s images to enhance the driver’s or autonomous driving system’s visual perception. A cooperative visual augmentation algorithm based on V2V will therefore be a key component of advanced driver assistance systems (ADAS) and autonomous driving systems in preventing potential hazards.
