Abstract

In massive multiple-input multiple-output (MIMO) systems, user equipment (UE) must feed downlink channel state information (CSI) back to the base station (BS). As the number of antennas increases, this CSI feedback consumes a significant share of uplink bandwidth. To reduce the overhead, we propose an efficient parallel attention transformer, called EPAformer, a lightweight network that combines the transformer architecture with efficient parallel self-attention (EPSA) for CSI feedback. EPSA effectively expands the attention area of each token within the transformer block by dividing the attention heads into parallel groups and conducting self-attention along horizontal and vertical stripes, which yields better feature compression and reconstruction. Simulation results show that EPAformer surpasses previous deep-learning-based approaches in both reconstruction performance and complexity.
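To make the stripe-based attention concrete, the following is a minimal PyTorch sketch of the parallel-group idea described above: half of the heads attend within horizontal stripes (rows of the 2-D token grid) and the other half within vertical stripes (columns), and the two groups run in parallel before being merged. All names, layer sizes, and the exact grouping are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StripedParallelAttention(nn.Module):
    """Illustrative EPSA-style block: heads split into two parallel groups,
    one attending along rows, the other along columns of a token grid."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert num_heads % 2 == 0, "heads are split into two parallel groups"
        assert dim % num_heads == 0
        self.h = num_heads
        self.hd = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) grid of tokens, e.g. an angular-delay CSI matrix.
        B, H, W, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        # Split channels into heads: (B, H, W, num_heads, head_dim).
        q, k, v = (t.view(B, H, W, self.h, self.hd) for t in (q, k, v))
        g = self.h // 2  # heads per group

        # Group 1: horizontal stripes -- attend along W within each row.
        qr, kr, vr = (t[..., :g, :].permute(0, 1, 3, 2, 4) for t in (q, k, v))
        row = F.scaled_dot_product_attention(qr, kr, vr).permute(0, 1, 3, 2, 4)

        # Group 2: vertical stripes -- attend along H within each column.
        qc, kc, vc = (t[..., g:, :].permute(0, 2, 3, 1, 4) for t in (q, k, v))
        col = F.scaled_dot_product_attention(qc, kc, vc).permute(0, 3, 1, 2, 4)

        # Merge the two parallel groups and project back to C channels.
        out = torch.cat([row, col], dim=-2).reshape(B, H, W, C)
        return self.proj(out)


if __name__ == "__main__":
    block = StripedParallelAttention(dim=32, num_heads=4)
    y = block(torch.randn(2, 8, 6, 32))
    print(y.shape)  # token grid shape is preserved: (2, 8, 6, 32)
```

Because each token only attends within its own stripe, the attention cost per group is linear in the other grid dimension rather than quadratic in the full token count, which is consistent with the lightweight design goal stated above.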
