Abstract

The increasing availability of multimodal data holds great promise for millimeter-wave (mmWave) multiple-antenna systems by enabling enhanced situational awareness. Specifically, including non-RF modalities to complement RF-only data in communications-related decisions, such as beam selection, may speed up decision making in situations where the standard requires an exhaustive search over all candidate options. However, accelerating research on this topic requires real-world datasets collected in a principled manner. This article presents an experimentally obtained dataset, comprising 23 GB of data, that aids beam selection in vehicle-to-everything mmWave bands, with the goal of facilitating machine learning (ML) in the wireless communication required for autonomous driving. Beyond this specific example, the article describes methodologies for creating such datasets, which pair time-synchronized, heterogeneous sensor data (LiDAR, GPS, and camera images) with RF ground-truth data on the beams selected in the mmWave band. While we use beam selection as the primary demonstrator, we also discuss how multimodal datasets may be used in other ML-based PHY-layer optimization areas, such as beamforming and localization.
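To make the beam-selection use case concrete, the sketch below illustrates the general idea of using non-RF modalities to shortlist beams: a small fusion network maps per-modality features to scores over a beam codebook, and the radio then sweeps only the top-K candidates rather than the full set. This is a minimal illustration, not the authors' model or dataset pipeline; the codebook size, feature dimensions, architecture, and the use of PyTorch are all illustrative assumptions.

```python
# Hypothetical sketch: multimodal (non-RF) features -> top-K beam shortlist.
# All dimensions and the architecture are assumptions, not the paper's model.
import torch
import torch.nn as nn

NUM_BEAMS = 64   # assumed mmWave codebook size
TOP_K = 5        # beams actually swept after the ML shortlist

class BeamSelector(nn.Module):
    def __init__(self, gps_dim=2, lidar_dim=128, image_dim=256):
        super().__init__()
        # Per-modality encoders (stand-ins for real feature extractors).
        self.gps_enc = nn.Sequential(nn.Linear(gps_dim, 32), nn.ReLU())
        self.lidar_enc = nn.Sequential(nn.Linear(lidar_dim, 64), nn.ReLU())
        self.image_enc = nn.Sequential(nn.Linear(image_dim, 64), nn.ReLU())
        # Fusion head: concatenated features -> logits over the beam codebook.
        self.head = nn.Linear(32 + 64 + 64, NUM_BEAMS)

    def forward(self, gps, lidar, image):
        z = torch.cat(
            [self.gps_enc(gps), self.lidar_enc(lidar), self.image_enc(image)],
            dim=-1,
        )
        return self.head(z)

model = BeamSelector()
# One synthetic sample standing in for time-synchronized sensor data.
gps = torch.randn(1, 2)      # e.g., normalized position coordinates
lidar = torch.randn(1, 128)  # e.g., flattened occupancy-grid features
image = torch.randn(1, 256)  # e.g., a CNN embedding of a camera frame

logits = model(gps, lidar, image)
shortlist = torch.topk(logits, TOP_K, dim=-1).indices
print("Sweep only these beam indices:", shortlist.tolist()[0])
```

In this setting, the RF ground truth (the beam found by exhaustive search) would serve as the training label, and the payoff is the reduction from sweeping all NUM_BEAMS candidates to measuring only the K shortlisted ones.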
