Abstract

Face detection is the basic step of many face-analysis tasks. In practice, face detectors usually run on mobile devices with limited memory and computing resources, so it is important to keep them lightweight. To this end, current methods usually focus on directly designing lightweight detectors. Nevertheless, the resource consumption of these lightweight detectors can be suppressed further. In this paper, we propose applying network pruning to a lightweight face detection network, which further reduces the detector's parameters and floating-point operations (FLOPs). To identify the less important channels, we train the network with sparsity regularization on the channel scaling factors of each layer. After this sparsity training, we remove the connections and corresponding weights whose scaling factors are near zero. We apply the proposed pruning pipeline to a state-of-the-art face detection method, EagleEye [5], and obtain a shrunken EagleEye model with fewer computing operations and parameters. The shrunken model achieves accuracy comparable to that of the unpruned model: with the proposed method, the EagleEye face detector achieves a 57.2% reduction in parameter size with a 2% accuracy loss on the WiderFace dataset.
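The pruning step described above, removing channels whose learned scaling factors are near zero after sparsity training, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the scaling-factor values, the layer shape, and the threshold are all hypothetical.

```python
import numpy as np

# Hypothetical per-channel scaling factors (e.g., the BatchNorm gammas of one
# layer) after training with an L1 sparsity penalty on them; several have been
# driven toward zero by the regularizer.
scaling_factors = np.array([0.92, 0.003, 0.71, 0.0008, 0.45, 0.002])

# Hypothetical conv weights for the same layer:
# shape (out_channels, in_channels, kernel_h, kernel_w).
rng = np.random.default_rng(0)
weights = rng.standard_normal((6, 3, 3, 3))


def prune_channels(weights, factors, threshold=0.01):
    """Keep only the output channels whose scaling factor exceeds threshold.

    Returns the pruned weight tensor, the surviving factors, and the boolean
    keep-mask (useful for also slicing the next layer's input channels).
    """
    keep = factors > threshold
    return weights[keep], factors[keep], keep


pruned_w, pruned_f, keep_mask = prune_channels(weights, scaling_factors)

print(int(keep_mask.sum()))  # 3 channels survive the 0.01 threshold
print(pruned_w.shape)        # (3, 3, 3, 3)
```

In a full pipeline, the keep-mask of one layer would also be used to slice the input channels of the following layer, and the pruned network would then be fine-tuned to recover accuracy; the threshold is typically chosen to hit a target parameter or FLOPs budget.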
