Abstract

Data privacy for mobile users (MUs) in mobile crowdsensing (MCS) has attracted significant attention. Federated learning (FL) breaks down data silos, enabling MUs to train locally without revealing their raw data. However, FL is vulnerable to selfish and malicious behavior by MUs, which can degrade the global model's performance. To tackle these challenges, we propose a rational, reliable FL framework (RRFL) for MCS. First, using Euclidean distance and the tracked frequency of malicious behavior, we compute risk scores for MUs and eliminate outlier updates. Second, we design a long-term, fair incentive mechanism that evaluates each MU's comprehensive reputation from the risk scores of its historical sensing tasks; rewards are allocated exclusively to consistently outstanding MUs, encouraging honest cooperation in MCS. Finally, we construct an extensive-form game with imperfect information and derive its sequential equilibrium to validate the rationality of the scheme. Experiments on the MNIST dataset demonstrate the effectiveness and reliability of RRFL, showing strong accuracy and overall cost performance: MCS participants achieve their desired maximum utility, and detection costs are reduced by over 50% compared with short-term FL incentive mechanisms in MCS.
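The abstract's first step, distance-based outlier elimination with risk-score tracking, can be illustrated with a minimal sketch. This is a hypothetical implementation under our own assumptions (coordinate-wise median as the reference update, a standard-deviation threshold, a fixed risk-score penalty), not the exact RRFL rule from the paper:

```python
import numpy as np

def filter_updates(updates, risk_scores, z_thresh=1.0, penalty=1.0):
    """Flag client updates whose Euclidean distance to the coordinate-wise
    median update is anomalously large, and raise the risk score of each
    flagged client. Hypothetical sketch, not the paper's actual algorithm."""
    U = np.stack(updates)                        # (n_clients, dim)
    center = np.median(U, axis=0)                # robust reference update
    dists = np.linalg.norm(U - center, axis=1)   # Euclidean distances
    mu, sigma = dists.mean(), dists.std()
    if sigma > 0:
        outliers = dists > mu + z_thresh * sigma
    else:
        outliers = np.zeros(len(updates), dtype=bool)
    # Track malicious-behavior frequency via an additive risk-score penalty.
    new_scores = [s + penalty if bad else s
                  for s, bad in zip(risk_scores, outliers)]
    kept = [u for u, bad in zip(updates, outliers) if not bad]
    return kept, new_scores, outliers
```

In this sketch, the server would aggregate only the `kept` updates, while the accumulated `new_scores` would feed the reputation-based incentive mechanism described in the second step.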
