Abstract

Pre-trained Models (PTMs) have reached the state-of-the-art (SOTA) on many Natural Language Processing (NLP) and Computer Vision (CV) tasks, and are regarded as foundation models for artificial intelligence (AI) systems. In this letter, we introduce PTMs into communication receiver design, which is the first attempt to apply PTMs to the communications field. The powerful capability of PTMs to capture knowledge from massive data enables PTM-based receivers to exploit the natural redundancy (NR) that widely exists in transmitted sources and thereby obtain large performance gains. For uncompressed Chinese text sources, we design a high-performance receiver based on the Bidirectional Encoder Representations from Transformers (BERT) model from the NLP field. We achieve an approximate maximum of posterior marginals (MPM) estimation by obtaining the conditional distribution of the input data from the pre-trained BERT model, and the performance of this approximate MPM estimation can be further improved by fine-tuning the BERT model with self-supervised learning. Simulation results demonstrate that the proposed PTM-based receiver significantly outperforms the classical receiver.
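To make the idea concrete, the following is a minimal sketch (not the authors' code) of the kind of per-symbol approximate MPM decision the abstract describes: a masked-language-model prior p(x_i | context) from a pre-trained BERT is combined with a channel likelihood p(y_i | x_i). The `bert-base-chinese` checkpoint, the Hugging Face `transformers` API, and the `channel_loglik` input (assumed to come from a soft demodulator) are assumptions for illustration and are not specified in the letter.

```python
# Hypothetical sketch: approximate MPM estimation for one noisy token position,
# combining BERT's contextual prior with a channel likelihood vector.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")
model.eval()


def mpm_decision(context_text, mask_pos, channel_loglik):
    """Return the token id maximizing p(y_i | x_i) * p(x_i | context) at mask_pos.

    channel_loglik: tensor of shape (vocab_size,) holding log p(y_i | x_i = v),
    assumed to be provided by the receiver's soft demodulator (hypothetical input).
    """
    tokens = tokenizer(context_text, return_tensors="pt")
    input_ids = tokens["input_ids"].clone()
    input_ids[0, mask_pos] = tokenizer.mask_token_id  # hide the unreliable symbol

    with torch.no_grad():
        logits = model(input_ids=input_ids,
                       attention_mask=tokens["attention_mask"]).logits

    # log p(x_i = v | context) from the pre-trained masked language model
    prior_logprob = torch.log_softmax(logits[0, mask_pos], dim=-1)

    # Posterior marginal up to a constant: log p(y_i | x_i) + log p(x_i | context)
    posterior_log = channel_loglik + prior_logprob
    return int(posterior_log.argmax())
```

In this reading, the pre-trained model supplies the source prior that classical receivers lack, and the fine-tuning step mentioned in the abstract would simply continue masked-language-model training on in-domain text so that the prior better matches the transmitted source.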
