Abstract

Unmanned ground vehicles (UGVs) and self-driving cars utilize visual sensors such as cameras, lidar, and radar not only for localization and obstacle avoidance but also to generate a 3D map of their surroundings. When an emergency vehicle, such as a fire truck or an ambulance, is approaching, self-driving cars are required to modify their path plan and find a safe spot rapidly. However, early detection of a fast-approaching emergency vehicle in an urban environment is challenging for a visual perception system, since it requires a direct, unobstructed line of sight. To improve the safety of self-driving cars, a localization algorithm is required that maximizes the time available for path modification and minimizes the time needed to detect the location and heading of the approaching vehicle, especially at intersections in urban environments. To overcome this challenge, we mounted a transducer array on top of a mobile robot and applied beamforming algorithms to predict the location and velocity vector of the remote dynamic vehicle. Even under high uncertainty, this strategy reduced the time required to update the occupancy grid that marks all potentially unsafe areas to avoid a collision. Two experimental setups, one in a controlled environment and one in an uncontrolled environment, were prepared. Following a preliminary analysis of the transducer characteristics in an anechoic chamber, an outdoor experiment with two mobile robots was conducted to benchmark the signal processing techniques while both source and observer were in motion.
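To illustrate the kind of acoustic beamforming the abstract refers to, the sketch below estimates the direction of arrival of a siren-like source with a simple delay-and-sum beamformer on a uniform linear transducer array. The array geometry, sampling rate, and signal model are illustrative assumptions for demonstration only, not the experimental configuration used in this work.

```python
# Minimal sketch: delay-and-sum beamforming for direction-of-arrival (DOA)
# estimation on a uniform linear transducer array. All parameters below
# (8 elements, 5 cm spacing, 48 kHz sampling) are assumed for illustration.
import numpy as np


def estimate_doa(signals, spacing=0.05, fs=48_000, c=343.0):
    """Return the bearing (degrees) that maximizes steered output power.

    signals: (num_elements, num_samples) array of synchronized samples.
    """
    num_elements, num_samples = signals.shape
    spectra = np.fft.rfft(signals, axis=1)            # per-channel spectrum
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)  # frequency bins (Hz)
    positions = np.arange(num_elements) * spacing     # element positions (m)

    candidate_angles = np.deg2rad(np.arange(-90, 91))
    powers = np.empty(candidate_angles.size)
    for i, theta in enumerate(candidate_angles):
        # Plane-wave delay at each element for a source at bearing theta.
        delays = positions * np.sin(theta) / c
        # Phase-align (steer) each channel toward theta, then sum coherently.
        steering = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        powers[i] = np.sum(np.abs(np.sum(spectra * steering, axis=0)) ** 2)
    return np.rad2deg(candidate_angles[np.argmax(powers)])


if __name__ == "__main__":
    # Synthetic test: a 1 kHz tone arriving from roughly +30 degrees.
    fs, c, spacing, n = 48_000, 343.0, 0.05, 4096
    t = np.arange(n) / fs
    true_angle = np.deg2rad(30.0)
    elements = np.arange(8) * spacing
    sig = np.stack(
        [np.sin(2 * np.pi * 1000 * (t - m * np.sin(true_angle) / c))
         for m in elements]
    )
    print("estimated DOA:", estimate_doa(sig, spacing, fs, c), "deg")
```

In practice, repeated bearing estimates of this kind over time, combined with range or Doppler information, could feed the occupancy grid update described above; that fusion step is beyond the scope of this sketch.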
