Abstract

Automotive targets turning at road junctions present large synthetic apertures to automotive radars over short dwell times, which can be exploited to obtain fine cross-range resolution. Likewise, the wide bandwidth of the automotive radar signal yields high-resolution range profiles. Together, these are exploited to generate inverse synthetic aperture radar (ISAR) images that offer rich information on the target vehicle's size, shape, and trajectory, which is useful for object recognition and classification. However, a key requirement for ISAR is translational motion compensation and estimation of the target's turning velocity. State-of-the-art motion compensation algorithms trade off computational complexity against accuracy. An alternative low-complexity method is to use an additional sensor to track the target motion. In this work, we propose to exploit computer vision algorithms to identify the radar target object in the sensor field of view (FoV) with high accuracy. Further, we propose to track the target vehicle's motion through fusion of vision and radar data. Vision data facilitate accurate estimation of the target's lateral position and lateral velocity, which complements the radar's capability of accurately estimating range and radial velocity. Through simulations and experimental evaluations with a monocular camera and Texas Instruments' millimeter-wave radar, we demonstrate the effectiveness of sensor fusion for accurate target tracking, enabling translational motion compensation and generation of high-quality ISAR images.
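
To illustrate the complementary nature of the two sensors described above, the following is a minimal sketch of camera-radar fusion using a standard constant-velocity Kalman filter. It is not the paper's actual fusion scheme; the state layout, noise values, and the assumption that the target lies near the radar boresight (so range and radial velocity approximate longitudinal position and velocity) are all illustrative choices.

```python
import numpy as np

# Minimal sketch: constant-velocity Kalman filter fusing camera and radar
# measurements for target tracking. Assumption: target is near the radar
# boresight, so range ~ longitudinal position and radial velocity ~
# longitudinal velocity. All noise values below are assumed, not from the paper.

dt = 0.05  # sensor frame interval in seconds (assumed)

# State: [lateral pos, longitudinal pos, lateral vel, longitudinal vel]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = 0.1 * np.eye(4)                       # process noise covariance (assumed)

# Camera measures lateral position; radar measures range (longitudinal
# position under the boresight assumption) and radial velocity.
H_cam = np.array([[1, 0, 0, 0]], dtype=float)
R_cam = np.array([[0.05]])                # camera noise variance (assumed)
H_rad = np.array([[0, 1, 0, 0],
                  [0, 0, 0, 1]], dtype=float)
R_rad = np.diag([0.02, 0.1])              # radar noise variances (assumed)

def predict(x, P):
    """Propagate the state and covariance one time step."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Fuse one measurement z with model (H, R) into the state estimate."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Example: one fused step from synthetic measurements.
x = np.array([0.0, 20.0, 0.0, -5.0])      # initial state guess
P = np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([0.4]), H_cam, R_cam)          # camera frame
x, P = update(x, P, np.array([19.7, -4.8]), H_rad, R_rad)   # radar detection
print(x)  # fused position/velocity estimate
```

In this toy setup, the camera update tightens the lateral components of the state while the radar update tightens the range and radial-velocity components, mirroring the complementary accuracy of the two sensors noted in the abstract.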
