Abstract
Target localization is an important task for a broad range of ubiquitous computing applications. A great deal of previous work has focused on applying advanced machine learning methods to image-based target detection and localization problems. Some work has also explored the localization capabilities of radar signals, which describe target distances and velocities. This paper proposes a novel machine learning architecture, SoliNet, that localizes target positions from radar images alone. Radar data were acquired with Google’s Soli sensors and processed into range-Doppler images describing the motion of a radar target; corresponding camera images of the target served as validation data. Over 60 experiments, spanning six unique target trajectories, were performed using a robotic actuator rig. SoliNet was then trained to perform localization against the known location of the target over time, as dictated by the programmed robot trajectory. SoliNet is an end-to-end trained combination of deep convolutional and recurrent neural networks. Experimental results showed an overall accuracy of 98.6% and effectiveness equivalent to camera-based target localization, demonstrating the promising capabilities of the proposed technique.
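The abstract does not specify the exact layer configuration, so the following is only a minimal sketch, in PyTorch, of the general pattern it describes: a convolutional encoder applied to each range-Doppler frame, a recurrent layer aggregating features across the frame sequence, and a regression head predicting target position. The class name RangeDopplerLocalizer, the layer sizes, the frame resolution, and the 2-D output are illustrative assumptions, not the published SoliNet design.

```python
# Hypothetical CNN + RNN localizer for range-Doppler frame sequences.
# Layer sizes, frame resolution, and the 2-D output are assumptions,
# not the published SoliNet configuration.
import torch
import torch.nn as nn


class RangeDopplerLocalizer(nn.Module):
    def __init__(self, hidden_size=128, out_dim=2):
        super().__init__()
        # Per-frame convolutional encoder: 1-channel range-Doppler image -> feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # -> (B*T, 32, 1, 1)
        )
        # Recurrent layer aggregates per-frame features over time.
        self.rnn = nn.GRU(input_size=32, hidden_size=hidden_size, batch_first=True)
        # Regression head predicts a target position at each time step.
        self.head = nn.Linear(hidden_size, out_dim)

    def forward(self, frames):
        # frames: (batch, time, 1, H, W) sequence of range-Doppler images
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).flatten(1)  # (B*T, 32)
        seq, _ = self.rnn(feats.view(b, t, -1))                # (B, T, hidden)
        return self.head(seq)                                  # (B, T, out_dim)


# Example: a batch of 4 sequences, 20 frames each, 32x32 range-Doppler maps.
if __name__ == "__main__":
    model = RangeDopplerLocalizer()
    positions = model(torch.randn(4, 20, 1, 32, 32))
    print(positions.shape)  # torch.Size([4, 20, 2])
```

In an end-to-end setup of this kind, the predicted positions would be regressed against the known robot-trajectory positions (e.g. with a mean-squared-error loss), which is consistent with, though not detailed by, the abstract.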