Abstract

In recent years, Federated Learning (FL) has been evolving rapidly in the field of health and quality-of-life systems. Its ability to train Machine Learning (ML) and Deep Learning (DL) models for the wide variety of Computer Vision (CV) applications that rely on them, the medical field among them, while simultaneously protecting the privacy of the sensitive data involved, makes FL a necessary tool in modern health and medical CV systems. One of its drawbacks, however, has proven to be the quality of the models used in the decentralized learning process and the ability to understand them. Low-quality, unethical, or biased models used for FL training, usually a consequence of Non-IID (non-Independent and Identically Distributed) data, can have catastrophic consequences, especially in critical medical infrastructure. In this paper, we tackle the problem of unfairness of DL models in the FL environment by leveraging latent mapping and representation learning in decision and augmentative DL models, while striving to visualize their knowledge distribution. In particular, micro-Manifolds produced from the latent deformations discovered in a DL model are analyzed and, through a proposed quantization pipeline, the fairness of that model is measured as a summary of quantitative metrics. The methodology performs this ethical evaluation in a fully unsupervised, model- and data-agnostic manner, and is documented on both medical and widely used benchmark datasets and DL architectures.
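
As a purely illustrative sketch of the general idea (not the paper's micro-Manifold quantization pipeline, which is not reproduced here), the snippet below assumes only per-sample latent embeddings extracted from an arbitrary DL model and computes an unsupervised, model- and data-agnostic balance score over latent regions; the clustering-based normalized entropy is a hypothetical stand-in for the kind of quantitative fairness summary the abstract describes.

# Illustrative sketch (assumption, not the paper's method): quantify how evenly a
# model's "knowledge" is spread over its latent space, given per-sample embeddings.
import numpy as np
from sklearn.cluster import KMeans


def latent_balance_score(embeddings: np.ndarray, n_regions: int = 10, seed: int = 0) -> float:
    """Unsupervised summary of how uniformly samples populate latent regions."""
    # Partition the latent space into regions (a stand-in for micro-manifolds).
    regions = KMeans(n_clusters=n_regions, n_init=10, random_state=seed).fit_predict(embeddings)

    # Occupancy distribution over the discovered regions.
    counts = np.bincount(regions, minlength=n_regions).astype(float)
    probs = counts / counts.sum()

    # Normalized Shannon entropy: 1.0 = perfectly balanced, 0.0 = fully collapsed.
    nonzero = probs[probs > 0]
    entropy = -(nonzero * np.log(nonzero)).sum()
    return float(entropy / np.log(n_regions))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical embeddings: a skewed mixture mimicking Non-IID client data.
    skewed = np.vstack([rng.normal(0, 1, (900, 16)), rng.normal(5, 1, (100, 16))])
    print(f"balance score: {latent_balance_score(skewed):.3f}")

A score close to 1 would indicate that the latent regions are populated evenly, while a low score flags a skewed knowledge distribution of the kind that Non-IID training data can induce.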
