Kernel discriminant subspace learning is effective for exploiting the structure of image datasets in high-dimensional nonlinear spaces. However, for large-scale image recognition applications, this technique usually suffers from a heavy computational burden. Although some kernel accelerating methods have been proposed, greatly reducing the computing time while maintaining favorable recognition accuracy remains challenging. In this paper, we introduce the idea of parallel computing into kernel subspace learning and build a parallel kernel discriminant subspace learning framework. In this framework, we first design a random non-overlapping equal data division strategy that divides the whole training set into several subsets and assigns each computational node one subset. Then, we learn kernel discriminant subspaces from these subsets separately, without mutual communication, and finally select the most appropriate subspace to classify test samples. Under this framework, we propose two novel kernel subspace learning approaches, i.e., parallel kernel discriminant analysis (PKDA) and parallel kernel semi-supervised discriminant analysis (PKSDA). We show the superiority of the proposed approaches in terms of time complexity as compared with related methods, and provide the theoretical underpinnings of our framework. For the experiments, we establish a parallel computing environment and employ three public large-scale image databases as experimental data. Experimental results demonstrate the efficiency and effectiveness of the proposed approaches.
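The random non-overlapping equal data division step can be sketched as follows. This is a minimal illustration only, not the paper's implementation; the helper name `random_equal_split` and the round-robin assignment are our assumptions about one reasonable way to realize the strategy described above.

```python
import random

def random_equal_split(indices, num_nodes, seed=0):
    """Randomly divide sample indices into num_nodes non-overlapping,
    (near-)equal subsets, one per computational node.

    Hypothetical helper illustrating the division strategy; the actual
    framework may partition data differently."""
    idx = list(indices)
    random.Random(seed).shuffle(idx)  # randomize sample order reproducibly
    # Round-robin slicing yields disjoint subsets whose sizes differ by at most 1.
    return [idx[k::num_nodes] for k in range(num_nodes)]

# Example: divide 10 training-sample indices across 3 nodes.
subsets = random_equal_split(range(10), 3)
```

Each node then learns its kernel discriminant subspace from its own subset independently, so no inter-node communication is required during training.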