Training a deep neural network (DNN) from scratch incurs substantial costs in money, energy, data, and hardware. When such a model is misused or redistributed without authorisation, its owner faces significant financial and intellectual property (IP) losses, so there is a pressing need to protect the IP of machine learning models. Watermarking has emerged as a promising solution for model traceability. It has been well studied for image classification models, but a significant research gap remains for other tasks such as object detection, for which no effective methods have yet been proposed. In this paper, we introduce a novel black-box watermarking method for object detection models. Our contributions include a watermarking technique that maps visual information to text semantics and a comparative study of the impact of fine-tuning techniques on watermark detectability. We report the model's detection performance and evaluate the effectiveness of fine-tuning strategies in preserving watermark integrity.
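The black-box verification setting the abstract refers to can be sketched generically: the owner queries a suspect model on a secret trigger set and measures how often the expected watermark responses appear. The sketch below is a minimal illustration of that paradigm only, not the paper's actual algorithm; the `model` callable, trigger set, and the 0.9 decision threshold are all hypothetical assumptions.

```python
# Generic black-box watermark verification sketch (illustrative only,
# not the method proposed in the paper).

def watermark_detection_rate(model, trigger_inputs, expected_outputs):
    """Fraction of secret trigger inputs for which the suspect model
    produces the owner-chosen watermark response."""
    hits = sum(
        1 for x, y in zip(trigger_inputs, expected_outputs) if model(x) == y
    )
    return hits / len(trigger_inputs)

def is_watermarked(model, trigger_inputs, expected_outputs, threshold=0.9):
    """Flag the model as watermarked when the detection rate exceeds a
    (hypothetical) threshold chosen to bound false positives."""
    rate = watermark_detection_rate(model, trigger_inputs, expected_outputs)
    return rate >= threshold

# Toy usage with a stub "suspect model" that always emits the secret label:
suspect_model = lambda x: "secret_label"
triggers = ["trigger_img_0", "trigger_img_1", "trigger_img_2"]
expected = ["secret_label"] * 3
print(watermark_detection_rate(suspect_model, triggers, expected))  # 1.0
print(is_watermarked(suspect_model, triggers, expected))            # True
```

Because only query access is needed, this style of check works even when the suspect model's weights are unavailable, which is what makes it "black-box".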