Abstract

Beyond federated optimization, a growing body of work studies incentive mechanism design for federated learning (FL), motivating data owners to share their resources securely. Most existing works consider only data quantity and neglect other key factors such as data quality and training time prediction. Taking all of these factors into account, we propose QuoTa, an online quality-aware incentive mechanism based on a multi-dimensional reverse auction, for achieving fast FL. Specifically, QuoTa first performs model quality detection to eliminate malicious or dispensable devices based on their historical behaviors and marginal contributions. Because CPU frequency may fluctuate during realistic model training, it then predicts model training time using the upper confidence bound algorithm. By combining these two modules, QuoTa incentivizes data owners with high data quality, high computing capability, and low cost to participate in the FL process. Through rigorous theoretical analysis and extensive experiments, we show that QuoTa satisfies all desired economic properties and achieves higher model accuracy and shorter convergence time than the state-of-the-art work.
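To make the training-time-prediction idea concrete, the following is a minimal, illustrative sketch of a generic upper-confidence-bound estimator for per-device training time; it is not QuoTa's actual algorithm, and the class and method names (`UCBTimeEstimator`, `ucb_speed`, `fastest_devices`) are assumptions introduced here for illustration only. Each device is treated as a bandit arm whose observed round times update an empirical mean, and an optimistic bound on training speed guides device selection under fluctuating CPU frequency.

```python
import math


class UCBTimeEstimator:
    """Illustrative UCB-style training-time estimator for FL devices.

    Hypothetical sketch, not the paper's QuoTa mechanism: each device is an
    'arm'; observed round times update its empirical mean, and an optimistic
    bound on speed (1 / time) favors devices that are fast or under-explored.
    """

    def __init__(self, device_ids):
        self.counts = {d: 0 for d in device_ids}       # rounds observed per device
        self.mean_time = {d: 0.0 for d in device_ids}  # empirical mean training time (s)
        self.total_rounds = 0

    def update(self, device_id, observed_time):
        """Record one observed training time for a device."""
        self.total_rounds += 1
        n = self.counts[device_id] + 1
        self.counts[device_id] = n
        # incremental update of the running mean
        self.mean_time[device_id] += (observed_time - self.mean_time[device_id]) / n

    def ucb_speed(self, device_id):
        """Optimistic speed estimate; unexplored devices are tried first."""
        n = self.counts[device_id]
        if n == 0:
            return float("inf")
        bonus = math.sqrt(2.0 * math.log(max(self.total_rounds, 1)) / n)
        return 1.0 / self.mean_time[device_id] + bonus

    def fastest_devices(self, k):
        """Select the k devices with the highest UCB speed estimates."""
        return sorted(self.counts, key=self.ucb_speed, reverse=True)[:k]


# Example usage with hypothetical device IDs and observed round times.
estimator = UCBTimeEstimator(["d1", "d2", "d3"])
estimator.update("d1", 12.0)
estimator.update("d2", 8.5)
print(estimator.fastest_devices(2))
```

In a full incentive mechanism, such speed estimates would be only one input alongside reported costs and measured data quality when deciding which bids to accept.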
