Training convolutional neural networks (CNNs) with back-propagation (BP) is a time-consuming and resource-intensive process, primarily because the dataset must be iterated over many times. In contrast, analytic learning trains neural networks in a single epoch, offering a potential remedy to these challenges. However, existing studies of analytic learning have been limited to multilayer perceptrons (MLPs). In this article, we propose an analytic formulation for convolutional neural network learning (ACnnL), a significant step towards non-iterative learning paradigms for CNNs. Our formulation shows that ACnnL extends the regularization principles underlying analytic MLP training. From the viewpoints of implicit regularization and network interpretability, we offer insights into why CNNs often exhibit superior generalization. ACnnL is validated on classification tasks over the benchmark datasets MNIST, FashionMNIST, CIFAR10, CIFAR100 and Tiny-ImageNet. Encouragingly, ACnnL trains CNNs markedly faster than BP, approximately 17 times faster on GPU and 113 times faster on CPU, while maintaining prediction accuracies reasonably close to those obtained with BP; for instance, a 5-layer vanilla CNN trained by ACnnL achieves accuracies of 0.9931, 0.9155, 0.7049 and 0.4628 on these datasets. Moreover, our experiments reveal a unique advantage of ACnnL in small-sample scenarios, where training data are scarce or expensive. In a nutshell, we put forward, for the first time, an analytic method for fast CNN training that handles small-sample data well and offers inherent network interpretability.
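To make the notion of analytic (non-iterative) learning concrete, the following is a minimal sketch, not the paper's ACnnL itself: it solves a single linear layer's weights in closed form via regularized least squares, the same family of regularized analytic solutions that prior MLP work relies on. All variable names and the toy data are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of analytic learning (not the paper's ACnnL):
# instead of iterating with back-propagation, solve for a layer's
# weights W in one shot via ridge-regularized least squares.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))       # 100 samples, 20 features (toy data)
Y = np.eye(5)[rng.integers(0, 5, 100)]   # one-hot targets, 5 classes
lam = 1e-3                               # regularization strength (assumed)

# Closed-form solution of min_W ||XW - Y||^2 + lam ||W||^2:
#   W = (X^T X + lam I)^{-1} X^T Y
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

pred = (X @ W).argmax(axis=1)            # class predictions in a single pass
```

The single `solve` call replaces an entire BP training loop for this layer; the regularization term `lam` plays the role of the constraints the abstract alludes to.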