Abstract
Owing to their feature-extraction capability, deep learning (DL)-based methods have recently been applied to channel state information (CSI) compression feedback in massive multiple-input multiple-output (MIMO) systems. Existing DL-based CSI compression methods are usually effective at extracting a certain type of feature in the CSI. However, the CSI usually contains two types of propagation features, i.e., the non-line-of-sight (NLOS) propagation-path feature and the dominant propagation-path feature, especially in channel environments with rich scatterers. To fully extract both propagation features and learn a dual-feature representation of the CSI, this paper proposes a dual-feature-fusion neural network (NN), referred to as DuffinNet. The proposed DuffinNet adopts a parallel structure in which a convolutional neural network (CNN) and an attention-empowered neural network (ANN) separately extract the different features in the CSI, after which a fusion NN explores their interplay. Built upon DuffinNet, a new encoder-decoder framework, referred to as Duffin-CsiNet, is developed to improve the end-to-end performance of CSI compression and reconstruction. To facilitate the application of Duffin-CsiNet in practice, this paper also presents a two-stage approach for codeword quantization of the CSI feedback. In addition, a transfer learning-based strategy is introduced to improve the generalization of Duffin-CsiNet, enabling the network to be applied to new propagation environments. Simulation results show that the proposed Duffin-CsiNet noticeably outperforms existing DL-based methods in terms of reconstruction performance, encoder complexity, and network convergence, validating the effectiveness of the proposed dual-feature-fusion design.
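To make the parallel dual-branch design described above concrete, the following is a minimal PyTorch sketch of a DuffinNet-style feature extractor operating on a 2-channel (real/imaginary) CSI matrix. The module name `DualFeatureFusion`, all layer sizes, and the specific convolution/attention configurations are illustrative assumptions; the abstract states only that a CNN branch and an attention branch run in parallel and that a fusion NN combines their outputs.

```python
import torch
import torch.nn as nn

class DualFeatureFusion(nn.Module):
    """Sketch of a parallel CNN + attention feature extractor with fusion.

    Hypothetical architecture: the paper's actual layer choices are not
    given in the abstract, so every dimension below is an assumption.
    """

    def __init__(self, channels: int = 2, embed_dim: int = 32):
        super().__init__()
        # CNN branch: local spatial features (e.g., NLOS scattering paths).
        self.cnn_branch = nn.Sequential(
            nn.Conv2d(channels, embed_dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(embed_dim),
            nn.LeakyReLU(0.3),
        )
        # Attention branch: long-range dependencies (e.g., dominant paths).
        self.proj_in = nn.Conv2d(channels, embed_dim, kernel_size=1)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        # Fusion network: learns the interplay of the two feature maps.
        self.fusion = nn.Conv2d(2 * embed_dim, embed_dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        f_cnn = self.cnn_branch(x)                           # (B, E, H, W)
        tokens = self.proj_in(x).flatten(2).transpose(1, 2)  # (B, H*W, E)
        f_attn, _ = self.attn(tokens, tokens, tokens)        # self-attention
        f_attn = f_attn.transpose(1, 2).reshape(b, -1, h, w)
        return self.fusion(torch.cat([f_cnn, f_attn], dim=1))

# Usage: an 8-sample batch of 32x32 angular-delay CSI matrices.
csi = torch.randn(8, 2, 32, 32)
features = DualFeatureFusion()(csi)
print(features.shape)  # torch.Size([8, 32, 32, 32])
```

In an encoder-decoder framework such as Duffin-CsiNet, a block like this would sit at the front of the encoder, with its fused feature map subsequently compressed into a low-dimensional codeword, quantized, fed back, and reconstructed at the decoder.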