Abstract

Dynamic pruning and conditional convolution are crucial for reducing the computational complexity of convolutional neural networks, enabling their deployment on resource-limited devices. However, existing methods focus on only one of these aspects in isolation, ignoring the collaborative potential of devices and the similarity between input samples in edge-cluster scenarios, which makes it difficult to eliminate the redundancy in a given network structure. To reduce computation cost while enhancing model representation, we propose an input-driven dynamic adaptive network ensemble method, named IDDANet. Our method follows a two-stage model synthesis strategy: it first dynamically prunes the network according to sample complexity and similarity, yielding a lightweight model and a set of input-dependent combination coefficients; it then combines the convolution parameters of multiple models to synthesize a more accurate ensemble model. Extensive experiments on several datasets demonstrate that our method significantly reduces computational complexity while improving model representation ability compared with state-of-the-art methods. For example, it reduces the FLOPs of ResNet-18 by 58.5% with only a 0.1% drop in Top-1 accuracy on CIFAR-10.
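To illustrate the second stage described above, the following is a minimal, hypothetical sketch (not the paper's actual implementation) of how input-dependent combination coefficients can mix the convolution parameters of several candidate models into a single per-sample kernel, in the style of conditional convolution. The class name `InputConditionedConv`, the routing head, and the number of candidates are assumptions introduced for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class InputConditionedConv(nn.Module):
    """Hypothetical sketch: a routing head produces per-sample coefficients
    that mix a bank of candidate convolution kernels before one convolution
    is applied per sample."""

    def __init__(self, in_channels, out_channels, kernel_size, num_candidates=4):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.num_candidates = num_candidates
        # Bank of candidate kernels, one set per candidate model.
        self.weight = nn.Parameter(
            torch.randn(num_candidates, out_channels, in_channels,
                        kernel_size, kernel_size) * 0.01
        )
        # Routing head: global average pooling + linear layer yields the
        # input-dependent combination coefficients.
        self.router = nn.Linear(in_channels, num_candidates)

    def forward(self, x):
        b, c, h, w = x.shape
        # One coefficient vector per input sample.
        coeff = torch.softmax(self.router(x.mean(dim=(2, 3))), dim=1)  # (b, n)
        # Mix the candidate kernels into a single kernel per sample.
        mixed = torch.einsum("bn,noikl->boikl", coeff, self.weight)
        # Grouped-convolution trick: fold the batch into the channel axis so
        # each sample is convolved with its own mixed kernel.
        x = x.reshape(1, b * c, h, w)
        mixed = mixed.reshape(b * self.out_channels, self.in_channels,
                              self.kernel_size, self.kernel_size)
        out = F.conv2d(x, mixed, padding=self.kernel_size // 2, groups=b)
        return out.reshape(b, self.out_channels, out.shape[-2], out.shape[-1])


if __name__ == "__main__":
    layer = InputConditionedConv(16, 32, 3)
    y = layer(torch.randn(8, 16, 32, 32))
    print(y.shape)  # torch.Size([8, 32, 32, 32])
```

Because the coefficients are computed from each input, simple samples can down-weight most candidate kernels while harder samples draw on the full bank, which is the intuition behind combining dynamic pruning with conditional convolution.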

