In massive multiple-input multiple-output (MIMO) systems, user equipment (UE) must feed downlink channel state information (CSI) back to the base station (BS). As the number of antennas grows, the CSI feedback overhead consumes a significant share of uplink bandwidth. To reduce this overhead, we propose the efficient parallel attention transformer (EPAformer), a lightweight network that combines the transformer architecture with efficient parallel self-attention (EPSA) for the CSI feedback task. EPSA effectively expands the attention area of each token within a transformer block by splitting the attention heads into parallel groups that perform self-attention within horizontal and vertical stripes, thereby improving feature compression and reconstruction. Simulation results show that EPAformer surpasses previous deep-learning-based approaches in both reconstruction accuracy and computational complexity.
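To make the stripe-based parallel grouping concrete, the following is a minimal PyTorch sketch of one plausible EPSA layout: half of the heads attend within horizontal stripes of the CSI feature map and the other half within vertical stripes, in parallel. The class names (StripeAttention, EPSA), the stripe width, and the tensor shapes are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn


class StripeAttention(nn.Module):
    """Multi-head self-attention restricted to horizontal or vertical stripes."""

    def __init__(self, dim, num_heads, stripe_width, vertical=False):
        super().__init__()
        self.num_heads = num_heads
        self.sw = stripe_width          # assumed stripe width (illustrative)
        self.vertical = vertical
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)

    def forward(self, x):               # x: (B, H, W, C)
        B, H, W, C = x.shape
        if self.vertical:               # vertical stripes = horizontal stripes
            x = x.transpose(1, 2)       # of the transposed grid
            H, W = W, H
        # Partition the H x W grid into stripes of height `sw`; each stripe
        # of sw * W tokens attends only within itself.
        x = x.reshape(B, H // self.sw, self.sw * W, C)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split channels into heads: (B, n_stripes, heads, tokens, head_dim).
        def heads(t):
            return t.reshape(B, -1, self.sw * W, self.num_heads,
                             C // self.num_heads).transpose(2, 3)
        q, k, v = map(heads, (q, k, v))
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = (attn.softmax(dim=-1) @ v).transpose(2, 3).reshape(B, H, W, C)
        if self.vertical:
            out = out.transpose(1, 2)   # restore the original orientation
        return out


class EPSA(nn.Module):
    """Run horizontal- and vertical-stripe attention on parallel head groups."""

    def __init__(self, dim, num_heads=8, stripe_width=2):
        super().__init__()
        # Half of the heads (and channels) attend in horizontal stripes,
        # the other half in vertical stripes, concurrently.
        self.h_attn = StripeAttention(dim // 2, num_heads // 2,
                                      stripe_width, vertical=False)
        self.v_attn = StripeAttention(dim // 2, num_heads // 2,
                                      stripe_width, vertical=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):               # x: (B, H, W, C) CSI feature map
        xh, xv = x.chunk(2, dim=-1)
        return self.proj(torch.cat([self.h_attn(xh), self.v_attn(xv)], dim=-1))


# Usage with an assumed 32x32 angular-delay CSI map and 64 feature channels:
x = torch.randn(1, 32, 32, 64)
print(EPSA(dim=64)(x).shape)            # torch.Size([1, 32, 32, 64])
```

Because every token sits in both a horizontal and a vertical stripe across the two head groups, stacking such blocks lets information propagate across the whole grid at a cost that grows with the stripe size rather than with the full token count.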