Abstract
Usually, the detection of traffic objects such as vehicles is hampered by the lack of depth information in the visual input, which makes it difficult to obtain results directly and quickly. Typically, separating objects from a complex background requires a complex model or prior knowledge, which can be computationally expensive or simply infeasible. To address this issue, this paper uses depth visual information to accurately segment roads and vehicles, so that complex models are not needed to detect objects in the visual input. First, an unsupervised deep-learning-based monocular depth estimation method is used to obtain the stereo disparity map. Then a non-parametric, refined U-V disparity mapping method is used to obtain the road region of interest. Next, parallel scanning of the road is used to determine the source and vanishing points, and an adjacent-disparity-similarity algorithm is used to complete and extract the target region so that roads and vehicles can be detected. The algorithm fuses multiple features, such as the height-width ratio, perspective ratio, and area ratio, to accurately segment the target region. The effectiveness of the proposed method is tested on a public dataset, and the experimental results show that the proposed model can accurately and efficiently detect roads and vehicles in a variety of scenarios.
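The abstract builds on U-V disparity mapping; the paper's refined, non-parametric variant is not reproduced here, but the following minimal sketch (in Python, with assumed function names and parameters) shows the basic U- and V-disparity histograms that such methods start from: the road surface projects to an oblique line in the V-disparity map, while vertical obstacles such as vehicles form high-count horizontal segments in the U-disparity map.

```python
import numpy as np

def uv_disparity_maps(disparity, max_disp=128):
    """Build basic U- and V-disparity histograms from a disparity map.

    disparity : 2-D array of quantized disparity values, shape (H, W).
    max_disp  : number of disparity bins (assumed parameter, not from the paper).

    Returns (u_disp, v_disp):
      u_disp, shape (max_disp, W): per-column disparity histogram.
      v_disp, shape (H, max_disp): per-row disparity histogram.
    """
    h, w = disparity.shape
    d = np.clip(disparity.astype(np.int32), 0, max_disp - 1)

    # V-disparity: histogram the disparity values of each image row.
    # The road surface appears as an oblique line in this map.
    v_disp = np.zeros((h, max_disp), dtype=np.int32)
    for row in range(h):
        v_disp[row] = np.bincount(d[row], minlength=max_disp)

    # U-disparity: histogram the disparity values of each image column.
    # Vertical obstacles (e.g., vehicles) appear as horizontal segments.
    u_disp = np.zeros((max_disp, w), dtype=np.int32)
    for col in range(w):
        u_disp[:, col] = np.bincount(d[:, col], minlength=max_disp)

    return u_disp, v_disp
```

In practice, the disparity map here would come from the monocular depth estimation stage, and the road region of interest would then be extracted by fitting the road line in the V-disparity map; those refinements are specific to the paper and are not sketched above.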