Abstract

Peer-to-peer (P2P) learning is a decentralized approach to organizing collaboration among end devices, known as agents. Agents hold heterogeneous data, and this heterogeneity disrupts the convergence and accuracy of the collectively learned models. A common technique to mitigate the negative impact of heterogeneous data is to arrange the learning process in a multi-task setting in which each task, although sharing the same learning objective, is learned separately. However, the multi-task technique can also be applied to solve distinct learning tasks. This paper presents and evaluates a novel approach that utilizes an encoder-only transformer model to enable collaboration between agents learning two distinct Natural Language Processing (NLP) tasks. The evaluation revealed that collaboration among agents, even when they work toward separate objectives, can yield mutual benefits, particularly when the connections between agents are chosen carefully. The multi-task collaboration led to a statistically significant increase of 11.6% in mean relative accuracy compared to the baseline results for the individual tasks. To our knowledge, this is the first study demonstrating a successful and beneficial collaboration between two distinct NLP tasks in a peer-to-peer setting.
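
The abstract does not specify the exact architecture or tasks, but the general idea it describes can be illustrated with a minimal sketch: two peers train on distinct NLP tasks (here assumed, for illustration only, to be sequence classification and token classification), each keeping a private task head while periodically averaging a shared encoder-only transformer. All model sizes, task choices, and the averaging step below are assumptions, not the paper's method.

```python
# Illustrative sketch (not the paper's exact architecture): a shared
# encoder-only transformer with task-specific heads, so peers training on
# distinct NLP tasks can exchange the encoder while keeping heads local.
import copy
import torch
import torch.nn as nn


class SharedEncoder(nn.Module):
    """Encoder-only transformer whose weights could be shared between peers."""

    def __init__(self, vocab_size=30_000, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, token_ids):
        return self.encoder(self.embed(token_ids))  # (batch, seq, d_model)


class SequenceClassifierAgent(nn.Module):
    """Agent for task A (e.g., sentiment classification) -- an assumed example task."""

    def __init__(self, encoder, num_classes=2):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(256, num_classes)  # local head, never shared

    def forward(self, token_ids):
        hidden = self.encoder(token_ids)
        return self.head(hidden.mean(dim=1))  # mean-pool over the sequence


class TokenClassifierAgent(nn.Module):
    """Agent for task B (e.g., named-entity recognition) -- also an assumed example."""

    def __init__(self, encoder, num_tags=9):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(256, num_tags)  # local head, never shared

    def forward(self, token_ids):
        return self.head(self.encoder(token_ids))  # per-token logits


def average_encoders(agent_a, agent_b):
    """One hypothetical P2P exchange step: average only the shared encoder weights."""
    with torch.no_grad():
        for p_a, p_b in zip(agent_a.encoder.parameters(), agent_b.encoder.parameters()):
            mean = (p_a + p_b) / 2
            p_a.copy_(mean)
            p_b.copy_(mean)


if __name__ == "__main__":
    agent_a = SequenceClassifierAgent(SharedEncoder())
    agent_b = TokenClassifierAgent(copy.deepcopy(agent_a.encoder))
    tokens = torch.randint(0, 30_000, (4, 16))  # dummy batch of token ids
    print(agent_a(tokens).shape)  # torch.Size([4, 2])
    print(agent_b(tokens).shape)  # torch.Size([4, 16, 9])
    average_encoders(agent_a, agent_b)  # one collaboration round between the two peers
```

In this sketch each peer keeps its own encoder copy and only the averaging step couples them, which mirrors the decentralized setting; how often and with which neighbors such exchanges happen is exactly the kind of connection choice the abstract identifies as important.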
