Abstract

Most existing neural networks use static filters with fixed model capacity to process all samples. To cover the large variation among samples, these networks carry a considerable redundant computational budget, which limits their inference efficiency. To reduce this redundant computation and increase inference speed, we propose a scalable dynamic filter that customizes its parameters and shape conditioned on the input sample. Specifically, our scalable dynamic filter uses a kernel prediction head to predict dynamic kernels and a channel mask head to adaptively prune redundant channels. As a result, a kernel with dynamic parameters and an adaptive shape is produced for efficient inference. Extensive experiments on both image classification and object detection demonstrate that our method outperforms previous methods while offering lower computational complexity and higher inference efficiency.
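
The abstract does not specify implementation details, but the two described components (a kernel prediction head and a channel mask head) can be illustrated with a minimal PyTorch-style sketch. Everything below, including the class name `ScalableDynamicConv` and the choice of global average pooling plus linear layers for both heads, is an assumption for illustration only and is not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScalableDynamicConv(nn.Module):
    """Hypothetical sketch of a sample-conditioned convolution:
    a kernel prediction head generates per-sample filter weights,
    and a channel mask head gates (prunes) output channels."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        # Kernel prediction head: global context -> dynamic kernel weights.
        self.kernel_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_channels, out_channels * in_channels * kernel_size ** 2),
        )
        # Channel mask head: global context -> soft per-channel gate in [0, 1].
        self.mask_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_channels, out_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        k = self.kernel_size
        # Predict one convolution kernel per sample in the batch.
        kernels = self.kernel_head(x).view(b * self.out_channels, c, k, k)
        # Grouped-conv trick: fold the batch into groups so each sample
        # is convolved with its own predicted kernel.
        out = F.conv2d(x.view(1, b * c, h, w), kernels, padding=k // 2, groups=b)
        out = out.view(b, self.out_channels, h, w)
        # Gate output channels; channels with near-zero mask values could be
        # skipped entirely at inference time to save computation.
        mask = self.mask_head(x).view(b, self.out_channels, 1, 1)
        return out * mask


# Example usage (hypothetical shapes):
layer = ScalableDynamicConv(in_channels=64, out_channels=64)
y = layer(torch.randn(2, 64, 32, 32))  # -> (2, 64, 32, 32)
```

This sketch uses a soft sigmoid mask for simplicity; the actual method may employ a discrete or differentiable pruning mechanism to realize the adaptive kernel shape described in the abstract.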
