Abstract

Deep learning-based models have become ubiquitous across a wide range of applications, including computer vision, natural language processing, and robotics. Despite their efficacy, deep neural network (DNN) models face a significant challenge: the risk of copyright leakage, stemming from the inherent exposure of the full model architecture and from the communication burden of publishing large models. Safeguarding the intellectual property of DNN models while reducing communication time during model publishing remains an open problem. To this end, this paper introduces a novel approach based on knowledge distillation that trains a surrogate model to stand in for the original DNN model. Specifically, a knowledge distillation generative adversarial network (KDGAN) is proposed to train a student model that achieves strong performance while safeguarding the copyright of the original large teacher model and improving communication efficiency during model publishing. Comprehensive experiments demonstrate the efficacy of the proposed model copyright protection, the communication efficiency of model publishing, and the superiority of the proposed KDGAN over other copyright protection mechanisms.
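Since the abstract only names the mechanism, the following minimal PyTorch sketch illustrates the general KDGAN idea it describes: a compact student is distilled from a frozen teacher while a discriminator tries to tell teacher logits from student logits. The architectures, loss weights, and temperature below are illustrative assumptions, not the paper's actual configuration.

# Minimal KDGAN-style training step (illustrative sketch, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed toy architectures: a large teacher, a compact student, and a
# discriminator that scores logit vectors as teacher-like ("real") or not.
teacher = nn.Sequential(nn.Linear(784, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
disc = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
T = 4.0  # distillation temperature (assumed value)

def train_step(x, y):
    with torch.no_grad():  # teacher stays frozen; only its outputs are used
        t_logits = teacher(x)
    s_logits = student(x)

    # Discriminator update: teacher logits are "real", student logits "fake".
    d_real = disc(t_logits)
    d_fake = disc(s_logits.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Student update: match softened teacher outputs (KD loss), fit the
    # ground-truth labels, and fool the discriminator (adversarial loss).
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    ce = F.cross_entropy(s_logits, y)
    adv = F.binary_cross_entropy_with_logits(disc(s_logits),
                                             torch.ones_like(d_fake))
    s_loss = kd + ce + 0.1 * adv  # 0.1 adversarial weight is an assumption
    opt_s.zero_grad()
    s_loss.backward()
    opt_s.step()
    return d_loss.item(), s_loss.item()

# Smoke test with random data standing in for a real dataset.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))

Under this scheme, only the compact student would be published, which is consistent with the abstract's twin goals: the teacher's architecture and weights stay private, and the published artifact is far smaller to transmit.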
