Abstract

Extra-large massive multiple-input multiple-output (XL-MIMO) is envisioned as a promising technology for 6G wireless systems, where the number of antennas is increased to an unprecedented scale, resulting in extremely large aperture arrays. In this case, different users may see different parts of the array (termed the visible region, VR) due to spatial non-stationarity. Exploiting VR information can facilitate low-complexity transmission design, but acquiring this information for a large number of users remains challenging. To this end, we first establish a VR model for XL-MIMO systems and show that a user's VR is location-dependent. Assuming that the VRs of some beacon users are known a priori, we propose three location-based VR recognition schemes: Voronoi cell partition, weighted Voronoi cell partition, and a neural network (NN) approach termed VR-Net, which takes the location of a user as input and returns its VR index as output. Simulation results show that all schemes achieve high VR recognition accuracy when the number of beacon users is sufficiently large. Notably, in the more practical scenario where the available location-VR dataset is limited, the proposed VR-Net achieves markedly better recognition performance.
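
The abstract only names the three location-based schemes; the sketch below is a rough, hypothetical illustration (not the paper's implementation) of how they could be prototyped. It assumes 2-D user locations, a synthetic beacon dataset with toy location-dependent VR indices, a power-diagram-style weighting for the weighted Voronoi variant, and a small scikit-learn MLP standing in for VR-Net. All data, names, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of location-based VR recognition (illustrative, not the paper's code).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical beacon dataset: 2-D locations (metres) and their known VR indices.
beacon_locs = rng.uniform(0.0, 100.0, size=(200, 2))
beacon_vrs = (beacon_locs[:, 0] // 25).astype(int)  # toy location-dependent VRs

def voronoi_vr(user_loc, locs, vrs):
    """Voronoi cell partition: copy the VR index of the nearest beacon user."""
    d = np.linalg.norm(locs - user_loc, axis=1)
    return vrs[np.argmin(d)]

def weighted_voronoi_vr(user_loc, locs, vrs, weights):
    """Weighted Voronoi partition: squared distances offset by per-beacon weights
    (power-diagram style) before selecting the closest beacon."""
    d = np.linalg.norm(locs - user_loc, axis=1) ** 2 - weights
    return vrs[np.argmin(d)]

# VR-Net stand-in: a small MLP classifier mapping location -> VR index,
# trained on the beacon users' location-VR pairs.
vr_net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
vr_net.fit(beacon_locs, beacon_vrs)

user = np.array([40.0, 55.0])
print("Voronoi VR:   ", voronoi_vr(user, beacon_locs, beacon_vrs))
print("W-Voronoi VR: ", weighted_voronoi_vr(user, beacon_locs, beacon_vrs,
                                            np.zeros(len(beacon_locs))))
print("VR-Net VR:    ", int(vr_net.predict(user.reshape(1, -1))[0]))
```

With zero weights the weighted variant reduces to the plain Voronoi rule; in practice the weights would be tuned from the beacon data, and the learned classifier is where an approach like VR-Net can generalize better when the location-VR dataset is small.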
