Abstract

In healthcare, patient information is a scarce but critical asset; it is treated as private data and is often protected by law. Healthcare is also among the least explored domains in machine learning, largely because building effective artificial intelligence (AI) models for the preliminary diagnosis of various diseases requires a large corpus of data, which can only be obtained by pooling patient information from multiple sources. For these sources to agree to share their data across distributed systems for training algorithms and models, there must be an assurance that the personally identifiable information (PII) of the respective Data Owners will not be disclosed. This paper proposes PriMed, an approach that adds robust privacy-preserving mechanisms to convolutional neural networks (CNNs) for training and inference on medical images without compromising privacy. Because the privacy of the data is preserved, large amounts of data can be accumulated effectively to increase the accuracy and efficiency of AI models in healthcare. The approach combines privacy-enhancing techniques, namely Federated Learning, Differential Privacy, and Homomorphic Encryption, into a hybrid that provides a private and secure environment for learning from data.
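To make the hybrid concrete, the following is a minimal sketch (not the paper's implementation) of one federated round in which each simulated client clips its model update and perturbs it with Gaussian noise before the server averages the updates. The model size, client data, learning rate, and noise scale are hypothetical placeholders, and the homomorphic encryption of the transmitted updates is omitted.

import numpy as np

MODEL_DIM = 10          # hypothetical model parameter count
CLIP_NORM = 1.0         # L2 clipping bound applied to each client update
NOISE_STD = 0.5         # Gaussian noise scale (placeholder, not a calibrated privacy budget)
rng = np.random.default_rng(0)

def local_update(global_weights: np.ndarray, client_data: np.ndarray) -> np.ndarray:
    """Toy local training step: treat the mean of the client's data as a gradient."""
    gradient = client_data.mean(axis=0)
    return global_weights - 0.1 * gradient   # one SGD step with a fixed learning rate

def privatize(update: np.ndarray) -> np.ndarray:
    """Clip the update to CLIP_NORM and add Gaussian noise (differential-privacy-style perturbation)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, CLIP_NORM / (norm + 1e-12))
    return clipped + rng.normal(0.0, NOISE_STD * CLIP_NORM, size=update.shape)

# One federated round: each client sends a privatized delta; the server averages them.
global_weights = np.zeros(MODEL_DIM)
clients = [rng.normal(size=(20, MODEL_DIM)) for _ in range(3)]   # synthetic client datasets

deltas = []
for data in clients:
    new_weights = local_update(global_weights, data)
    deltas.append(privatize(new_weights - global_weights))

global_weights = global_weights + np.mean(deltas, axis=0)
print("Updated global weights:", global_weights)

In the paper's full setting, the raw medical images never leave the clients, and the aggregation step would additionally operate on homomorphically encrypted updates rather than plaintext arrays as in this sketch.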
