Abstract

Mobile networks and devices provide users with ubiquitous connectivity, while much of their functionality and many of their business models rely on data analysis and processing. In this context, Machine Learning (ML) plays a key role and has been successfully leveraged by the different actors in the mobile ecosystem (e.g., application and Operating System developers, vendors, and network operators). Traditional ML designs assume that (user) data are collected and models are trained in a centralized location. However, this approach has privacy consequences related to data collection and processing. Such concerns have incentivized the scientific community to design and develop privacy-preserving ML methods, including techniques like Federated Learning (FL), where the ML model is trained or personalized on user devices close to the data; Differential Privacy, where data are manipulated to limit the disclosure of private information; Trusted Execution Environments (TEEs), where most of the computation runs in a secure, private environment; and Multi-Party Computation, a cryptographic technique that allows various parties to run joint computations without revealing their private data to each other.
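As a concrete illustration of two of these techniques, the minimal sketch below (using only NumPy; the function names, parameter values, and data are hypothetical and not taken from the paper) shows how a server could average model updates contributed by several devices in the spirit of Federated Learning, and how Laplace noise can be added to a statistic so that its release satisfies epsilon-Differential Privacy.

```python
import numpy as np

def federated_average(client_updates):
    """Average model-weight updates from several devices (FedAvg-style).

    client_updates: list of 1-D NumPy arrays, one per participating device.
    Only the updates leave the devices; the raw training data never does.
    """
    return np.mean(np.stack(client_updates), axis=0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Laplace noise with scale sensitivity/epsilon bounds how much any single
    user's data can influence the published result.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Hypothetical usage: three devices each contribute a local weight update.
updates = [np.array([0.10, -0.20]),
           np.array([0.12, -0.18]),
           np.array([0.08, -0.25])]
global_update = federated_average(updates)

# A counting query (sensitivity 1) released with a privacy budget of 0.5.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```

This is only a sketch of the underlying ideas; production systems combine such building blocks with secure aggregation, TEEs, or Multi-Party Computation protocols, as discussed in the abstract.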
