Abstract
Data aggregation based on machine learning (ML) in mobile edge computing allows participants to send ephemeral parameter updates of local ML models trained on their private data, rather than the data itself, to an untrusted aggregator. However, even though parameter updates carry far less information than the raw data, the untrusted aggregator can still use them to reconstruct participants' private data. Existing work either incurs extremely high overhead or ignores malicious participants that drop out. The latest research handles dropouts at a desirable cost, but it is vulnerable to malformed message attacks. To this end, we focus on ML-based data aggregation in a practical setting where malicious participants may send malformed parameter updates to perturb the total parameter update learned by the aggregator; moreover, they may drop out and collude with other participants or with the untrusted aggregator. For this scenario, we propose a scheme named DAML, which to the best of our knowledge is the first attempt to verify participants' submissions in ML-based data aggregation. The main idea is to validate participants' submissions via SSVP, a novel secret-shared verification protocol, and then aggregate participants' parameter updates using SDA, a secure data aggregation protocol. Simulation results demonstrate that DAML protects participants' data privacy with acceptable overhead.
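The abstract does not spell out how secret sharing lets the aggregator learn only the combined update, so the following is a minimal illustrative sketch of additive secret sharing over a finite field, the generic building block behind secure aggregation. It is not the paper's SSVP or SDA protocol; all function names (share_update, aggregate) and parameters (PRIME, SCALE) are hypothetical choices made for this example.

```python
# Illustrative sketch only: additive secret sharing for secure aggregation.
# NOT the paper's SSVP/SDA protocols; names and parameters are assumptions.
import secrets

PRIME = 2**61 - 1   # field modulus, assumed large enough for the encoded updates
SCALE = 10**6       # fixed-point scaling for real-valued parameter updates


def encode(x: float) -> int:
    """Map a real-valued update entry into the finite field via fixed-point encoding."""
    return int(round(x * SCALE)) % PRIME


def decode(v: int) -> float:
    """Undo the encoding; values above PRIME // 2 represent negative numbers."""
    if v > PRIME // 2:
        v -= PRIME
    return v / SCALE


def share_update(update: list[float], n_shares: int) -> list[list[int]]:
    """Split one participant's parameter update into n additive shares mod PRIME."""
    shares = [[0] * len(update) for _ in range(n_shares)]
    for j, x in enumerate(update):
        total = 0
        for i in range(n_shares - 1):
            r = secrets.randbelow(PRIME)   # each share alone is uniformly random
            shares[i][j] = r
            total = (total + r) % PRIME
        shares[-1][j] = (encode(x) - total) % PRIME
    return shares


def aggregate(combined_shares: list[list[int]]) -> list[float]:
    """Aggregator sums the (already combined) shares; only the total update is revealed."""
    dim = len(combined_shares[0])
    summed = [0] * dim
    for share in combined_shares:
        for j, v in enumerate(share):
            summed[j] = (summed[j] + v) % PRIME
    return [decode(v) for v in summed]


if __name__ == "__main__":
    # Two participants, each holding a 3-dimensional local parameter update.
    updates = [[0.5, -1.25, 2.0], [1.0, 0.25, -0.5]]
    shares_p1 = share_update(updates[0], 2)
    shares_p2 = share_update(updates[1], 2)
    # Share-holders combine the shares they received before forwarding them,
    # so the aggregator never sees any individual participant's update.
    combined = [
        [(a + b) % PRIME for a, b in zip(shares_p1[i], shares_p2[i])]
        for i in range(2)
    ]
    print(aggregate(combined))   # -> [1.5, -1.0, 1.5], the element-wise sum
```

In this toy setting the aggregator recovers only the element-wise sum of the updates, which is the property a verification step such as SSVP would need to preserve while rejecting malformed submissions.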