Abstract

The transformation of existing urban environments into digital smart cities has become a reality, aiming to automate daily activities so as to reduce human effort and time. Vision-based sensors are commonly used for monitoring cities; they acquire large amounts of diverse data and store them for further computer vision processing. In this article, we explore whether and how a vehicle can be navigated in smart cities by cost-effective means (vision-based sensors), without calibrated sensors or the Global Positioning System (GPS). Vehicle localization and navigation normally require on-board calibrated sensors and a reliable GPS link, but in an urban environment these sensors fail to perform well in indoor settings (e.g., tunnels), in crowded and congested areas, and in severe weather conditions. The most effective vision-based navigation techniques depend on image registration. A successful and effective registration requires sufficient illumination in the environment, dominance of the static scene over moving objects, enough texture for apparent motion to be observable, and the necessary scene overlap between consecutive frames. We propose a novel approach to vision-based vehicle navigation using a modified normalized phase correlation. In the proposed approach, textured and textureless surfaces are distinguished through the identification of corresponding features. A Gram polynomial basis function is used to remove the Gibbs error generated by the peak in the registration process, and entropy-based tensor approximation is used to remove outliers for robust image registration. Experiments performed in real time during test drives show excellent results with respect to estimated position accuracy in comparison with GPS-calculated data.
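The abstract's registration step builds on normalized phase correlation. The sketch below shows only the standard form of that technique (the paper's modifications, i.e. the Gram polynomial correction and the entropy-based tensor approximation, are not reproduced here): the cross-power spectrum of two frames is normalized to unit magnitude, and the peak of its inverse transform gives the translation between them. Written with NumPy, as a minimal illustration rather than the authors' implementation:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (dy, dx) translation taking frame b to frame a
    via standard normalized phase correlation."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12          # normalize to unit magnitude; epsilon avoids division by zero
    corr = np.fft.ifft2(R).real     # correlation surface: a sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame into negative offsets.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

The unit-magnitude normalization is what makes the peak sharp but also what introduces ringing (Gibbs error) around it, which motivates the polynomial correction mentioned in the abstract.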
