Deep learning has revolutionized communication systems by introducing end-to-end models that address channel impairments. Autoencoders, a type of deep learning architecture, are adept at learning compact data representations. However, conventional autoencoders in end-to-end models can suffer from overfitting, which limits their effectiveness in noisy communication environments. To address this issue, we propose a Sparse Autoencoder-based (SAE) model that enforces sparsity and promotes the extraction of robust features. Despite its effectiveness, the SAE model may still fail to focus on the most relevant features of the input data. To overcome this limitation, we further introduce an Attention Mechanism-based Sparse Autoencoder (ASA) model, which combines the feature extraction capabilities of a sparse autoencoder with an attention mechanism that selectively highlights informative features of the signal. Through simulations, we demonstrate that both proposed models significantly improve the performance of M-PSK and M-QAM communication systems. When trained at an average SNR of 7 dB, both models achieve significant performance improvements at higher testing SNRs. Our results show that the SAE model outperforms the conventional Maximum Likelihood Detection (MLD) model and baseline autoencoder systems but exhibits an error floor at average SNRs beyond 16 dB for BPSK and beyond 14 dB for higher-order modulation schemes. As M increases, the performance gap between the MLD and the proposed SAE model narrows. The ASA model, however, effectively mitigates the error floor observed in the SAE model for all values of M across both modulation schemes. This research highlights the benefits of integrating an attention mechanism with an SAE, yielding enhanced robustness and reliability in communication systems, with improved accuracy and reduced error rates.
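For illustration only, the sketch below shows one plausible form of an attention-gated sparse autoencoder for an end-to-end M-ary system over an AWGN channel, written in PyTorch. The layer sizes, the softmax feature-gating used as the attention mechanism, the L1 activation penalty used to enforce sparsity, and the power normalization are all our assumptions; the paper's exact architecture and training setup may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionSparseAutoencoder(nn.Module):
    """Minimal sketch: encoder -> power-normalized channel symbols ->
    AWGN channel -> attention-gated decoder. Hypothetical sizes."""
    def __init__(self, M=4, n_channel=2, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(M, hidden), nn.ReLU(),
            nn.Linear(hidden, n_channel),
        )
        # Attention: per-feature softmax weights that highlight the
        # most informative hidden activations before classification.
        self.attn = nn.Sequential(nn.Linear(hidden, hidden), nn.Softmax(dim=-1))
        self.dec_in = nn.Linear(n_channel, hidden)
        self.dec_out = nn.Linear(hidden, M)

    def forward(self, one_hot, snr_db=7.0):
        x = self.encoder(one_hot)
        # Normalize to unit average power per channel use.
        x = x / x.norm(dim=-1, keepdim=True) * x.size(-1) ** 0.5
        # AWGN channel at the given (training) average SNR.
        noise_std = (10 ** (-snr_db / 10)) ** 0.5
        y = x + noise_std * torch.randn_like(x)
        h = F.relu(self.dec_in(y))
        h = h * self.attn(h)        # attention gating of decoder features
        return self.dec_out(h), h   # activations returned for L1 penalty

# One training step: cross-entropy plus an assumed L1 sparsity penalty.
model = AttentionSparseAutoencoder()
msgs = torch.eye(4)[torch.randint(0, 4, (128,))]  # random one-hot messages
logits, acts = model(msgs, snr_db=7.0)            # train at 7 dB, as in the paper
loss = F.cross_entropy(logits, msgs.argmax(dim=-1)) + 1e-3 * acts.abs().mean()
loss.backward()
```

Dropping the `self.attn` gating reduces the sketch to the plain SAE variant, which is one way the two proposed models could be compared under identical training conditions.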