We study end-to-end learning-based frameworks for amplify-and-forward (AF) relay networks, with and without channel state information (CSI). The designed framework resembles an autoencoder (AE) in which the neural network (NN)-based source and destination nodes are optimized jointly in an end-to-end manner, while signal transmission takes place through an AF relay node. Unlike prior work that employs an NN-based relay node with full CSI, we consider a conventional relay node that only amplifies the received signal using CSI-based gains. When CSI is unavailable, we employ power-normalization-based amplification that normalizes the transmission power of each block of symbols. We propose and compare symbol-wise and bit-wise AE frameworks that minimize the categorical and binary cross-entropy losses, thereby maximizing the symbol-wise and bit-wise mutual information (MI), respectively. We estimate the MI and examine the convergence of both AE frameworks as a function of the signal-to-noise ratio (SNR). For both AE frameworks, we design coded modulation and differential coded modulation, depending on the availability of CSI at the destination node, that yield symbols in 2n dimensions, where n is the block length. To characterize the properties of the 2n-dimensional designs, we use metrics such as the minimum Euclidean distance, normalized second- and fourth-order moments, and the constellation figure of merit. We show that both AE frameworks learn similar spherical coded-modulation designs in 2n dimensions, and that the bit-wise AE, which inherently learns an optimal bit labeling, outperforms the symbol-wise AE (converging faster at low SNR) and the conventional AF relay network by a considerable SNR margin.
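To make the setup concrete, below is a minimal sketch (not the paper's implementation) of the bit-wise AE over a two-hop AF link without CSI: an NN encoder maps k bits to a power-normalized 2n-dimensional codeword, the relay merely power-normalizes each received block, and an NN decoder is trained with binary cross-entropy, which corresponds to maximizing the bit-wise MI. The layer sizes, the parameters k and n, the simplified real-valued fading model, and the training SNR are all illustrative assumptions.

```python
# A minimal sketch of a bit-wise AE over an AF relay link without CSI.
# Assumed values: k bits per block, block length n, per-hop SNR, layer sizes.
import torch
import torch.nn as nn

k, n = 4, 2                       # bits per block and block length (assumed)
dim = 2 * n                       # real dimensions per codeword (I and Q per channel use)
snr_db = 10.0                     # per-hop training SNR (assumed)
noise_std = 10 ** (-snr_db / 20)  # noise std for unit-power signals

def normalize_power(x):
    # Per-block power normalization: each 2n-dim block is scaled to power n
    return (n ** 0.5) * x / x.norm(dim=1, keepdim=True)

def rayleigh_gain(batch):
    # Real-valued Rayleigh fading gain with E[h^2] = 1 (simplified model)
    return torch.randn(batch, 2).pow(2).sum(dim=1, keepdim=True).sqrt() / 2 ** 0.5

encoder = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, dim))
decoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, k))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    bits = torch.randint(0, 2, (256, k)).float()
    x = normalize_power(encoder(bits))               # source: 2n-dim codeword
    y1 = rayleigh_gain(256) * x + noise_std * torch.randn_like(x)    # hop 1
    xr = normalize_power(y1)                         # AF relay without CSI: power normalization only
    y2 = rayleigh_gain(256) * xr + noise_std * torch.randn_like(xr)  # hop 2
    loss = bce(decoder(y2), bits)                    # minimizing BCE maximizes bit-wise MI
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Replacing the bit vectors with one-hot messages and the binary cross-entropy with categorical cross-entropy would give the corresponding symbol-wise AE.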