Abstract
DEtection TRansformer (DETR) is a recently proposed method that streamlines the detection pipeline and achieves results competitive with two-stage detectors such as Faster R-CNN. DETR models dispense with complex anchor generation and post-processing procedures, making the detection pipeline more intuitive. However, the numerous redundant parameters in transformers make DETR models computation- and storage-intensive, which seriously hinders their deployment on resource-constrained devices. In this paper, to obtain a compact end-to-end detection framework, we propose to deeply compress the transformers with low-rank tensor decomposition. The basic idea of our tensor-based compression method is to represent the large-scale weight matrix in a network layer as a chain of low-order matrices. Furthermore, we show that redundant attention heads hinder the performance of detection transformers. We thus propose a gated multi-head attention (GMHA) module that suppresses redundant attention information by normalizing the attention heads. In GMHA, each attention head has an independent gate that determines the attention value passed through, thereby down-weighting uninformative heads. Applying GMHA modules mitigates the accuracy drop of the tensor-compressed DETR models. Lastly, to obtain fully compressed DETR models, we introduce a low-bitwidth quantization technique that further reduces the model storage size. With the proposed methods, we achieve significant parameter and model-size reduction while maintaining high detection performance. We conduct extensive experiments on the COCO and PASCAL VOC datasets to validate the effectiveness of our tensor-compressed (tensorized) DETR models. On the COCO benchmark, we attain 3.7× full-model compression with 482× feed-forward network (FFN) parameter reduction and only a 0.6-point accuracy drop.
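The abstract names three mechanisms; the sketches below illustrate each one in PyTorch. They are minimal illustrations under stated assumptions, not the paper's implementation: the names (`TTLinear`, `GatedMultiheadAttention`, `quantize_symmetric`), the mode/rank factorizations, the sigmoid gate parameterization, and the bitwidth are all hypothetical choices, since the abstract does not specify them.

First, a tensor-train-style linear layer: the large weight matrix is never stored directly but is represented as a chain of small four-way cores, which is what drives the FFN parameter reduction.

```python
import torch
import torch.nn as nn
from math import prod

class TTLinear(nn.Module):
    """Sketch of a tensor-train (TT) linear layer: the dense
    (prod(in_modes) x prod(out_modes)) weight is stored only as a chain of
    small cores, cutting the parameter count from prod(n_k * m_k) down to
    sum(r_k * n_k * m_k * r_{k+1})."""

    def __init__(self, in_modes, out_modes, ranks):
        # in_modes/out_modes factorize the layer sizes, e.g. 256 = 4*8*8;
        # ranks has length len(in_modes) + 1 with ranks[0] = ranks[-1] = 1.
        super().__init__()
        self.in_modes, self.out_modes = list(in_modes), list(out_modes)
        self.cores = nn.ParameterList(
            nn.Parameter(0.1 * torch.randn(ranks[k], in_modes[k],
                                           out_modes[k], ranks[k + 1]))
            for k in range(len(in_modes))
        )

    def full_weight(self):
        # Contract the chain of cores back into the dense weight matrix.
        w = self.cores[0]                                # (1, n0, m0, r1)
        for core in self.cores[1:]:
            w = torch.einsum('...r,rnms->...nms', w, core)
        w = w.squeeze(0).squeeze(-1)                     # (n0, m0, n1, m1, ...)
        d = len(self.in_modes)
        w = w.permute(*(2 * k for k in range(d)),        # group input modes,
                      *(2 * k + 1 for k in range(d)))    # then output modes
        return w.reshape(prod(self.in_modes), prod(self.out_modes))

    def forward(self, x):                                # x: (batch, prod(in_modes))
        return x @ self.full_weight()
```

For example, a 256→2048 FFN weight holds 524,288 parameters, while a TT version with `in_modes=(4, 8, 8)`, `out_modes=(8, 16, 16)`, and `ranks=(1, 8, 8, 1)` stores only 9,472, roughly a 55× reduction (illustrative numbers, not the paper's configuration). Materializing the dense weight in `full_weight` keeps the sketch readable; a practical implementation would contract the input against the cores directly to save compute as well as storage.

Second, the head-gating idea behind GMHA: one learnable gate per attention head rescales that head's output before the output projection, so uninformative heads can be down-weighted.

```python
class GatedMultiheadAttention(nn.Module):
    """Sketch of per-head gating as described in the abstract; the sigmoid
    parameterization of the gates is an assumption."""

    def __init__(self, embed_dim, num_heads):
        super().__init__()
        self.h, self.d = num_heads, embed_dim // num_heads
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.out = nn.Linear(embed_dim, embed_dim)
        self.gate_logits = nn.Parameter(torch.zeros(num_heads))  # one gate per head

    def forward(self, x):                                # x: (batch, seq, embed_dim)
        b, s, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # split into heads: (batch, heads, seq, head_dim)
        q, k, v = (t.reshape(b, s, self.h, self.d).transpose(1, 2) for t in (q, k, v))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        heads = attn @ v                                 # (batch, heads, seq, head_dim)
        gates = torch.sigmoid(self.gate_logits)         # (heads,) in (0, 1)
        heads = heads * gates.view(1, self.h, 1, 1)     # down-weight weak heads
        return self.out(heads.transpose(1, 2).reshape(b, s, -1))
```

Finally, the storage-side step could look like plain uniform symmetric quantization; the paper's exact scheme and bitwidth are not given in the abstract.

```python
def quantize_symmetric(w: torch.Tensor, bits: int = 8):
    """Map float weights to signed integers in [-(2^(bits-1)-1), 2^(bits-1)-1];
    de-quantize with q.float() * scale. Assumes bits <= 8 for int8 storage."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax   # guard against all-zero w
    q = torch.round(w / scale).clamp(-qmax, qmax)
    return q.to(torch.int8), scale
```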