Abstract

Distributed estimation over multitask networks, where the target parameter vectors (tasks) of different nodes can differ, has received much attention recently. In this paper, we consider practical application scenarios in which tasks exhibit similarities, so that inter-task cooperation can improve the estimation performance of individual nodes. In most existing multitask learning studies, local estimates are transmitted directly between neighboring nodes and an adaptive combination strategy is then used to achieve inter-task cooperation. However, when the target parameter vectors contain sensitive information, direct transmission of local estimates may cause serious privacy breaches. To tackle this problem, we propose a privacy-preserving distributed multitask learning algorithm for collaborative estimation over networks. The proposed algorithm is implemented via a secure multiparty computation protocol that combines multiplicative/additive masking with an additively homomorphic encryption technique. While allowing each node to cooperate adaptively with its neighbors, the protocol also preserves the privacy of the local estimates. In addition, we present a thorough privacy analysis of the proposed algorithm. Simulation results show that the proposed algorithm effectively protects each node's task against leakage without sacrificing estimation performance.
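The additive homomorphism that such a protocol relies on can be sketched with a toy Paillier cryptosystem (an assumption for illustration only; the abstract does not name the specific scheme, and the tiny primes below are purely didactic). The property shown is the one that lets a neighbor compute a scaled combination term Enc(a·w) from an encrypted estimate Enc(w) without ever decrypting w:

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

# Toy Paillier keypair (illustrative tiny primes; real use needs >= 2048-bit moduli).
p, q = 293, 433
n = p * q          # public modulus
n2 = n * n
g = n + 1          # standard generator choice
lam = lcm(p - 1, q - 1)   # private key part

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # private key part (modular inverse)

def enc(m):
    """Encrypt m under the public key (n, g); r is a fresh random mask."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Decrypt with the private key (lam, mu)."""
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: Enc(m1) * Enc(m2) = Enc(m1 + m2)
c1, c2 = enc(7), enc(35)
assert dec((c1 * c2) % n2) == 42

# Scalar multiplication: Enc(m)^a = Enc(a * m) -- a neighbor can apply a
# combination weight to an encrypted estimate without seeing the plaintext.
assert dec(pow(enc(6), 7, n2)) == 42
```

In a combination step, a node would send Enc(w) to a neighbor, the neighbor would raise it to its (integer-encoded) combination weight and multiply such ciphertexts together, and only the key holder could decrypt the aggregated result; the multiplicative/additive masks mentioned in the abstract would additionally hide individual contributions.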
