Abstract
Privacy preservation is of key importance for the transition of modern deep learning algorithms into everyday applications dealing with sensitive data, such as healthcare, finance, and other domains of critical infrastructure. A major impediment to research in computer science is the considerable time required to set up experiments and evaluate them. In the domain of privacy-preserving deep learning, this is aggravated by the dispersion of implementations across frameworks and libraries. This work introduces and documents PPML-TSA, a versatile framework for privacy-preserving time series classification. The framework was initially used to evaluate privacy-preserving methods across different model architectures and datasets. PPML-TSA supports classification on all datasets of the UCR & UEA repository, and its modular design allows quick adaptation and extension to additional model architectures and datasets. Out of the box, the code supports a variety of model architectures (such as AlexNet, FCN, FDN, LSTM, and LeNet) as well as privacy-preserving deep learning methods (Differential Privacy, Federated Learning, their combination, and Homomorphic Encryption). We believe that our framework facilitates further research on privacy-preserving deep learning, accelerating innovation and disruption in the field.
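To illustrate the kind of training loop such a framework wraps, the sketch below applies Differential Privacy (DP-SGD) to a small 1D convolutional time series classifier using PyTorch and Opacus. This is a minimal, hypothetical example under assumed placeholder data, model, and hyperparameters; it does not reflect PPML-TSA's actual API.

```python
# Hypothetical sketch: DP-SGD training of a small 1D-CNN time series classifier.
# Model, data, and hyperparameters are illustrative placeholders, not PPML-TSA's API.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy univariate time series: 128 samples of length 64, 2 classes (random placeholder data).
x = torch.randn(128, 1, 64)
y = torch.randint(0, 2, (128,))
loader = DataLoader(TensorDataset(x, y), batch_size=32)

# Minimal fully convolutional classifier (stand-in for an FCN/LeNet-style model).
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=8, padding=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

# Attach Opacus' PrivacyEngine: per-sample gradient clipping plus calibrated Gaussian noise.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # illustrative noise scale
    max_grad_norm=1.0,      # illustrative clipping bound
)

for epoch in range(5):
    for batch_x, batch_y in loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()

# Report the privacy budget spent for an assumed delta.
print(f"epsilon spent: {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```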