Abstract
Network compression methods reduce the number of network parameters and the cost of computation while maintaining the desired network performance. However, the safety assurance of many compression methods rests on a large amount of experimental data, and inputs not covered by that data may lead to unsafe behavior. In this work, we develop a discrepancy computation method for two convolutional neural networks that produces a concrete value characterizing the maximum output difference between a network and its compressed counterpart. Using ImageStar-based reachability analysis, we propose a novel method that merges the two networks to compute this difference. We illustrate the reachability computation for each layer in the merged network, including the convolution, max pooling, fully connected, and ReLU layers. We apply our method to a numerical example to demonstrate its correctness. Furthermore, we evaluate our method on a VGG16 model compressed with Quantization Aware Training (QAT); the results show that our approach efficiently computes an accurate maximum output discrepancy between the original and the compressed neural network.
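To illustrate the core idea of merging two networks and bounding their output difference, the sketch below shows a minimal PyTorch-style construction. It is not the paper's implementation: the class and function names (DiscrepancyNetwork, sampled_max_discrepancy) are hypothetical, and the sampling loop only gives an empirical lower bound on the discrepancy, whereas the paper's ImageStar-based reachability analysis computes a sound bound over the entire input region.

```python
# Minimal sketch (assumed PyTorch API), not the paper's implementation.
import torch
import torch.nn as nn


class DiscrepancyNetwork(nn.Module):
    """Merged network: feeds one input to both networks and outputs their difference."""

    def __init__(self, original: nn.Module, compressed: nn.Module):
        super().__init__()
        self.original = original
        self.compressed = compressed

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.original(x) - self.compressed(x)


def sampled_max_discrepancy(merged: nn.Module, center: torch.Tensor,
                            radius: float, n_samples: int = 1000) -> float:
    """Empirical (lower-bound) estimate of the maximum output discrepancy
    over an L-infinity ball around `center`; reachability analysis would
    replace this sampling step with a sound over-approximation."""
    worst = 0.0
    with torch.no_grad():
        for _ in range(n_samples):
            x = center + radius * (2 * torch.rand_like(center) - 1)
            worst = max(worst, merged(x).abs().max().item())
    return worst
```

In this sketch, `original` could be a pretrained VGG16 and `compressed` its QAT-quantized version; the merged module makes the output difference itself the object being analyzed, which is the structure the reachability computation in the paper operates on.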