Abstract

End-to-end semantic communications (ESC) rely on deep neural networks (DNNs) to boost communication efficiency by transmitting only the semantics of data. However, ESC is data-hungry, requiring millions of neural parameters to be trained, while uploading the massive data generated by users/devices to a central server is usually impractical due to privacy concerns, regulatory constraints, and network congestion. Inspired by federated learning, in which each device contributes to the parameter update by independently computing gradients on its local data, we present UDSem, a unified distributed learning framework of semantic communications for texts and images over wireless networks. The key ingredients of UDSem are 1) a flexible learning mechanism that can be tailored to each device based on its computing capabilities, 2) an efficient learning approach that splits the system into multiple modules and updates their parameters independently on both the clients and the central server, and 3) a mixed aggregation method that globally updates the neural parameters of each module of the ESC system. Equipped with these ingredients, UDSem bridges the gap between the training efficiency of ESC systems and the privacy requirements of users' data. Experimental results on two public benchmarks show the superiority of UDSem in terms of convergence speed and semantic interpretation, potentially paving the way for distributed semantic communications in future wireless networks.
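The federated-learning principle the abstract builds on can be illustrated with a minimal sketch: clients compute gradients on their own data and a server aggregates them, weighted by local dataset size (FedAvg-style). Note that UDSem's actual per-module "mixed aggregation method" is not detailed in the abstract; the toy model, function names, and learning rate below are illustrative assumptions, not the paper's implementation.

```python
def local_gradient(params, data):
    """Toy local gradient for a 1-D least-squares model y = w * x.
    Each client computes the gradient of its own loss independently,
    so raw data never leaves the device (hypothetical helper)."""
    w = params[0]
    # d/dw of the mean squared error 0.5 * (w*x - y)^2 over the local data
    g = sum((w * x - y) * x for x, y in data) / len(data)
    return [g]

def federated_round(params, client_datasets, lr=0.05):
    """One communication round: clients compute gradients locally and the
    server aggregates them, weighted by local dataset size (FedAvg-style)."""
    total = sum(len(d) for d in client_datasets)
    agg = [0.0] * len(params)
    for data in client_datasets:
        g = local_gradient(params, data)
        weight = len(data) / total  # larger datasets contribute more
        for i in range(len(params)):
            agg[i] += weight * g[i]
    # server-side update of the global parameters
    return [p - lr * g for p, g in zip(params, agg)]

if __name__ == "__main__":
    # two clients whose data follows y = 2 * x; training should drive w -> 2
    clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
    params = [0.0]
    for _ in range(200):
        params = federated_round(params, clients)
    print(round(params[0], 3))  # → 2.0
```

In a real ESC system each "parameter vector" would be the weights of one module (e.g. the semantic encoder or decoder), aggregated separately per module as the abstract's third ingredient suggests.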
