Abstract

Non-linear precoding (NLP) for downlink multi-user multiple-input multiple-output (DL-MU-MIMO) transmission is receiving much attention as a promising technology to achieve high capacity within the limited bandwidths available to radio access systems. To minimize the required transmission power for DL-MU-MIMO and achieve high spectrum efficiency, Vector Perturbation (VP) was proposed as an optimal NLP scheme. Unfortunately, the original VP suffers from significant computational complexity in searching for the optimal perturbation vector, because the number of perturbation vector candidates is infinite. To reduce this complexity while keeping transmission performance close to that of Sphere Encoding VP (SE-VP), several recent studies have investigated efficient NLP schemes based on the Tomlinson-Harashima precoding (THP) approach, which applies successive pre-cancellation of inter-user interference (IUI) and offsets the transmission vector via the modulo operation. This paper investigates a transmission performance improvement of Lattice Reduction Aided THP (LRA THP), called Extended LRA THP, in which the modulo operation is replaced by searching for a perturbation vector and subtracting it from the transmit signal vector. This paper further proposes an extension of LRA THP with Minimum Mean Square Error (MMSE)-based successive IUI pre-cancellation, called Extended LRA MMSE-THP, as a novel NLP scheme. Computer simulations quantitatively show that the proposed Extended LRA MMSE-THP achieves better transmission performance than conventional NLP techniques while reducing the number of searches required to determine the optimal perturbation vector.
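To make the role of the perturbation vector concrete, here is a minimal sketch of vector perturbation over a zero-forcing precoder with a brute-force search over a truncated integer lattice. The system size, candidate set, and modulo base `tau` are illustrative assumptions for QPSK, not the paper's setup; practical schemes (SE-VP, THP variants) avoid this exhaustive search.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
K = 4  # users == transmit antennas (illustrative size)

# Random flat-fading channel and zero-forcing precoder P = H^H (H H^H)^{-1}
H = (rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))) / np.sqrt(2)
P = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# QPSK symbols with unit energy; tau = 2*(|c|_max + Delta/2) = 2*sqrt(2) here
s = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)
tau = 2 * np.sqrt(2)

# Brute-force search: perturb each user by a Gaussian integer in {-1,0,1}+j{-1,0,1},
# choosing the candidate that minimizes the transmit power ||P (s + tau*l)||^2.
ints = [a + 1j * b for a in (-1, 0, 1) for b in (-1, 0, 1)]
best_l, best_pow = None, np.inf
for cand in itertools.product(ints, repeat=K):
    l = np.array(cand)
    p = np.linalg.norm(P @ (s + tau * l)) ** 2
    if p < best_pow:
        best_pow, best_l = p, l

x = P @ (s + tau * best_l)  # transmitted vector

# Noiseless receiver: y = Hx = s + tau*l; the per-component modulo-tau
# operation removes the perturbation and recovers the original symbols.
def cmod(v, tau):
    m = lambda r: (r + tau / 2) % tau - tau / 2
    return m(v.real) + 1j * m(v.imag)

s_hat = cmod(H @ x, tau)
```

Because the all-zero perturbation is among the candidates, the search can only lower the transmit power relative to plain zero-forcing; the receiver needs no channel knowledge beyond the common modulo base.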
