Abstract

Despite promising advances in deep learning for medical applications, challenges remain owing to data scarcity, compounded by privacy concerns and data-ownership disputes. Recent explorations of distributed-learning paradigms, particularly federated learning, have aimed to mitigate these challenges. However, these approaches often incur substantial communication and computational overhead and leave potential vulnerabilities in their privacy safeguards. We therefore propose a self-supervised masked sampling distillation technique called MS-DINO, tailored to the vision transformer architecture. This approach removes the need for continual client-server communication and strengthens privacy through a modified encryption mechanism inherent to the vision transformer, while minimizing the computational burden on client-side devices. Rigorous evaluations across various tasks confirmed that our method outperforms existing self-supervised distributed-learning strategies and fine-tuned baselines.
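
To make the core idea concrete, the following is a minimal, hypothetical PyTorch sketch of masked sampling distillation under a DINO-style teacher-student setup: each forward pass sees only a random subset of image patch tokens, the student is trained to match the teacher's output distribution computed on a different subset, and the teacher is updated as an exponential moving average of the student. All names (`TinyViT`, `sample_patches`, `distill_step`) and hyperparameters are illustrative assumptions, not the authors' released implementation, and DINO details such as output centering and multi-crop augmentation are omitted.

```python
# Minimal sketch of masked-sampling self-distillation (hypothetical names,
# not the authors' code). Teacher and student each see a different random
# subset of patch tokens; the student matches the teacher's soft output,
# and the teacher is an EMA copy of the student.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyViT(nn.Module):
    """Toy patch-token encoder standing in for a vision transformer."""

    def __init__(self, num_patches=196, dim=64, out_dim=128):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, out_dim)

    def forward(self, tokens, idx):
        # Keep only the sampled patch positions; position embeddings are
        # gathered with the same indices, so spatial identity is preserved
        # while the unsampled patch content is never exposed.
        x = tokens + self.pos[:, : tokens.shape[1]]
        x = x.gather(1, idx.unsqueeze(-1).expand(-1, -1, x.shape[-1]))
        return self.head(self.encoder(x).mean(dim=1))


def sample_patches(batch, num_patches, keep):
    """Draw a random patch-index subset per image (the masked-sampling step)."""
    return torch.rand(batch, num_patches).argsort(dim=1)[:, :keep]


def distill_step(student, teacher, opt, tokens,
                 keep=49, tau_s=0.1, tau_t=0.04, ema=0.996):
    b, n, _ = tokens.shape
    with torch.no_grad():
        t_out = teacher(tokens, sample_patches(b, n, keep))
        t_probs = F.softmax(t_out / tau_t, dim=-1)
    s_out = student(tokens, sample_patches(b, n, keep))
    # Cross-entropy between teacher and student output distributions.
    loss = -(t_probs * F.log_softmax(s_out / tau_s, dim=-1)).sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():  # EMA teacher update
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(ema).add_(ps, alpha=1 - ema)
    return loss.item()


if __name__ == "__main__":
    student = TinyViT()
    teacher = copy.deepcopy(student).requires_grad_(False)
    opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
    tokens = torch.randn(8, 196, 64)  # stand-in for embedded image patches
    print(distill_step(student, teacher, opt, tokens))
```

Because only a random subset of patch tokens leaves the client per step, this style of training avoids transmitting full images or full feature maps, which is the intuition behind the privacy and communication claims above.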
