Abstract

Distributed machine learning is envisioned as the bedrock of future intelligent networks, in which agents exchange information with one another to train models collaboratively without uploading data to a central processor. Despite its broad applicability, a downside of distributed learning is the need for iterative information exchange among agents, which may incur communication overhead that is unaffordable in many practical systems with limited communication resources. Resolving this communication bottleneck requires communication-efficient distributed learning algorithms and protocols that reduce communication cost while still achieving satisfactory learning/optimization performance. Accomplishing this goal necessitates synergistic techniques from a diverse set of fields, including optimization, machine learning, wireless communications, game theory, and network/graph theory. This Special Issue is dedicated to communication-efficient distributed learning from multiple perspectives, including fundamental theories, algorithm design and analysis, and practical considerations.
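
The abstract's central theme, reducing per-round communication while preserving learning performance, can be made concrete with a small sketch. The following Python/NumPy example is illustrative only and is not drawn from any paper in the issue: it simulates one well-known compression technique, top-k gradient sparsification, in which each agent transmits only its k largest-magnitude gradient entries to an aggregator. The toy regression task, the function names, and the hyperparameters are all hypothetical.

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Zero all but the k largest-magnitude entries, so an agent only
    needs to transmit k (index, value) pairs instead of the full vector."""
    sparse = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse[idx] = grad[idx]
    return sparse

rng = np.random.default_rng(0)
d, n_agents, k, lr = 20, 4, 4, 0.1   # only k of d coordinates sent per round
w_true = rng.normal(size=d)

# Each agent holds a private shard of a synthetic linear-regression problem;
# raw data never leaves the agent, only compressed gradients do.
shards = []
for _ in range(n_agents):
    X = rng.normal(size=(50, d))
    shards.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w = np.zeros(d)
for _ in range(300):                             # synchronous training rounds
    compressed = []
    for X, y in shards:
        g = X.T @ (X @ w - y) / len(y)           # local least-squares gradient
        compressed.append(top_k_sparsify(g, k))  # compress before "sending"
    w -= lr * np.mean(compressed, axis=0)        # aggregator averages, updates

print(f"parameter error after training: {np.linalg.norm(w - w_true):.4f}")
```

In practice, such sparsification is usually paired with error feedback, where each agent locally accumulates the residual it did not transmit and adds it back in later rounds; the sketch omits that refinement for brevity.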
