In recent years, deploying and operating convolutional neural networks on edge devices with limited computing capabilities has become increasingly challenging due to their large network structures and computational cost. Current mainstream structured pruning algorithms mainly compress the network at the filter or layer level. However, these methods introduce substantial human intervention at coarse granularities, which may lead to unpredictable performance after compression. In this paper, we propose a group-based automatic pruning algorithm (GAP) via kernel fusion that automatically searches for the optimal pruning structure in a more fine-grained manner. Specifically, we first adopt a novel nonlinear dimensionality-reduction clustering algorithm to divide the filters of each convolutional layer into groups of equal size. We then encode the mutual distribution similarity of the kernels within each group and employ its KL divergence as an importance indicator to determine the retained kernel groups through weighted fusion. Subsequently, we introduce an intelligent search module that automatically explores and optimizes the pruned structure of each layer. Finally, the pruned filters are permuted to form a dense group convolution and fine-tuned. Extensive experiments on two image classification datasets with five advanced CNN models show that our GAP algorithm outperforms most existing state-of-the-art schemes, reduces manual intervention, and enables efficient end-to-end training of compact models.
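To make the grouping and importance-scoring steps concrete, the following is a minimal NumPy sketch under stated assumptions: the equal-size grouping here is a crude distance-sort heuristic standing in for the paper's nonlinear dimensionality-reduction clustering, and the KL-divergence score compares each kernel's normalized weight distribution against its group mean; all function names and parameters are illustrative, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # Normalize a kernel's flattened weights into a probability distribution.
    e = np.exp(x - x.max())
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions, with eps for stability.
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def group_filters(filters, n_groups):
    # Toy stand-in for the paper's clustering step: sort filters by their
    # distance to the layer-wide mean filter, then slice the ordering into
    # contiguous groups of equal size.
    n = len(filters)
    flat = filters.reshape(n, -1)
    order = np.argsort(np.linalg.norm(flat - flat.mean(axis=0), axis=1))
    size = n // n_groups
    return [order[i * size:(i + 1) * size] for i in range(n_groups)]

def kernel_importance(filters, group):
    # Score each kernel in a group by the KL divergence between its weight
    # distribution and the group's mean distribution; kernels that diverge
    # more from the group consensus carry more distinct information.
    dists = [softmax(filters[i].ravel()) for i in group]
    mean = np.mean(dists, axis=0)
    return np.array([kl_divergence(d, mean) for d in dists])
```

In this sketch, a layer's scores could then drive which kernel groups are retained before the remaining filters are permuted into a dense group convolution; the actual selection in the paper additionally uses weighted fusion and an automatic structure search, which are not reproduced here.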