Pruning techniques for convolutional neural networks (CNNs) have recently drawn attention as a way to reduce the consumption of computational resources. In particular, the Taylor-based method simplifies the evaluation of each filter's importance to the product of the gradient and the value of its output feature maps, and it outperforms other methods in reducing parameters and floating-point operations (FLOPs). However, the Taylor-based method sacrifices considerably more accuracy than other pruning algorithms when the overall pruning rate is large. In this article, we propose a self-adaptive attention factor (SAAF) to improve the performance of the slimmed model when conventional Taylor-based pruning is applied at higher pruning rates. Specifically, SAAF is computed from the ratio of filters remaining at an early stage of Taylor-based pruning, and pruned filters are then selectively recovered according to SAAF to improve the accuracy of the slimmed model. In this way, SAAF protects layers from being over-pruned, mitigating the accuracy degradation of Taylor-based pruning at large pruning rates while still compressing models substantially across various datasets. We evaluate SAAF on VGG-16 and ResNet-50 with CIFAR-10, Tiny-ImageNet, ImageNet-1000, and remote sensing images. Our method clearly outperforms the conventional Taylor-based method in accuracy, with only a slight sacrifice in the reduction of parameters and FLOPs, and compares favorably with other pruning methods.
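To make the mechanism concrete, the following PyTorch sketch illustrates the first-order Taylor importance score and one plausible reading of the SAAF recovery rule. The function names, the exact form of the factor (here taken to be the remaining filter ratio itself), and the keep-count formula are illustrative assumptions, not the paper's implementation.

```python
import torch

def taylor_importance(activations: torch.Tensor, gradients: torch.Tensor) -> torch.Tensor:
    """First-order Taylor criterion: per-filter importance estimated as the
    mean absolute product of each output feature map and its gradient."""
    # activations, gradients: (batch, channels, height, width)
    return (activations * gradients).abs().mean(dim=(0, 2, 3))

def saaf_keep_indices(importance: torch.Tensor,
                      target_prune_rate: float,
                      remaining_ratio: float) -> torch.Tensor:
    """Hypothetical SAAF-style recovery: scale the layer's pruning rate by the
    ratio of filters still remaining at an early pruning stage, so a heavily
    slimmed layer keeps (recovers) more of its filters than the target rate
    alone would allow."""
    saaf = remaining_ratio                     # assumed form of the attention factor
    effective_rate = target_prune_rate * saaf  # attenuated rate for over-slimmed layers
    n_keep = max(1, round(importance.numel() * (1.0 - effective_rate)))
    return torch.topk(importance, n_keep).indices  # indices of filters to keep

# Toy usage with fake feature maps and gradients from one conv layer:
acts = torch.randn(8, 64, 32, 32)
grads = torch.randn_like(acts)
keep = saaf_keep_indices(taylor_importance(acts, grads),
                         target_prune_rate=0.5, remaining_ratio=0.6)
```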