Abstract

We propose an automatic camera calibration method for a side-rear-view monitoring system in natural driving environments. The proposed method assumes that the camera is always mounted near the surface of the vehicle, so that a part of the vehicle always appears in the image. The method utilizes this photographed vehicle information because the captured vehicle always appears stationary in the image, regardless of the surrounding environment. The proposed algorithm detects the vehicle in the image and computes a similarity score between the detected vehicle and a previously stored vehicle model. Conventional online calibration methods require additional equipment or operate only in specific driving environments. In contrast, the proposed method can automatically calibrate camera-based monitoring systems in any driving environment without additional equipment. The calibration range of the automatic calibration method was verified through simulations and evaluated both quantitatively and qualitatively through actual driving experiments.
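The abstract's core step is computing a similarity score between the detected vehicle region and a stored vehicle model. As a minimal illustration only (the paper's actual score is not specified here), one simple choice is intersection-over-union of binary edge-pixel sets; the function name and data representation below are assumptions for the sketch:

```python
# Illustrative similarity between a detected reflected-vehicle area (RVA)
# and a stored vehicle edge model, using intersection-over-union (IoU) of
# edge-pixel coordinate sets. A sketch, not the paper's actual metric.

def edge_similarity(detected, model):
    """Return the IoU of two sets of (row, col) edge pixels, in [0.0, 1.0]."""
    if not detected and not model:
        return 1.0  # two empty edge maps are trivially identical
    inter = len(detected & model)
    union = len(detected | model)
    return inter / union

# Example: two edge sets sharing 2 of 4 distinct pixels.
a = {(0, 0), (0, 1), (1, 1)}
b = {(0, 1), (1, 1), (2, 2)}
print(edge_similarity(a, b))  # 2 shared / 4 total = 0.5
```

A higher score indicates that the detected region matches the stored vehicle model, which the calibration step can then exploit.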

Highlights

  • In recent years, vision-based Advanced Driver Assistance Systems (ADAS) have been developed continuously to provide safety and convenience to motorists

  • We eliminate outliers to improve the accuracy of Random Sample Consensus (RANSAC), which overcomes the problem of sparse edge information by collecting edges from multiple images

  • We confirmed that the elements that serve as static edge points inside the Reflected-Vehicle Area (RVA) must be photographed for automatic calibration of the side-rear-view monitoring system
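The highlights above describe collecting edge information from multiple images and rejecting outliers so that only static edges (those belonging to the reflected vehicle) remain. A minimal sketch of that idea, assuming a simple vote-count filter rather than the paper's full RANSAC pipeline (the function name and threshold are hypothetical):

```python
# Hedged sketch: accumulate edge maps from several frames and keep only the
# "static" edge pixels that recur across most frames. Transient edges from
# the moving background receive few votes and are discarded as outliers.

from collections import Counter

def static_edges(frames, min_votes):
    """Keep edge pixels observed in at least `min_votes` of the given frames.

    `frames` is a list of sets of (row, col) edge-pixel coordinates.
    """
    votes = Counter(p for frame in frames for p in frame)
    return {p for p, n in votes.items() if n >= min_votes}

frames = [
    {(1, 1), (2, 2), (9, 9)},  # (9, 9) is a transient background edge
    {(1, 1), (2, 2), (7, 3)},  # (7, 3) likewise appears only once
    {(1, 1), (2, 2)},
]
print(static_edges(frames, min_votes=3))  # {(1, 1), (2, 2)}
```

Only the pixels that persist across all three frames survive, which matches the intuition that the reflected vehicle is stationary in the image while the scenery moves.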

Introduction

Vision-based Advanced Driver Assistance Systems (ADAS) have been developed continuously to provide safety and convenience to motorists. We propose a side-rear-view camera calibration method that works even when no camera parameters are known and requires no offline calibration preprocessing step. In this method, a vision-based ADAS camera mounted near the side surface of a vehicle constantly photographs the vehicle. Deep-learning approaches require a huge amount of data covering vehicle types, camera parameters, and various driving environments, and collecting such data is very inconvenient and difficult. Large-scale rotation-invariant template matching has also been studied [35]; that method relies on color information, but the color of the RVA changes continuously because it reflects the surrounding environment.

Related Works
  Offline Calibration
  Online Calibration with Additional Devices
  Online Calibration without Additional Devices
Automatic Online Calibration
  Reflected-Vehicle Area Detection
  RVA Comparative Analysis to Estimate Parameters
  Parameterization
Results
  Experiments for Determining an Appropriate Number of Captured Images
  Field Experiments for Quantitative and Qualitative Evaluation
  Experiments with Various Cameras
  Comparison with Previous Methods
Conclusions