Recently, bidirectional principal component analysis (BDPCA) has proven to be an efficient tool for pattern recognition and image analysis, and encouraging experimental results have been reported and discussed in the literature. However, BDPCA must be performed in batch mode, meaning that all the training data must be available before the projection matrices are computed. If additional samples need to be incorporated into an existing system, the system must be retrained on the whole updated training set. Moreover, the scatter matrices of BDPCA are formulated as the sum of K (the sample size) image covariance matrices, which makes incremental learning directly on the scatter matrices impossible and thus presents a new challenge for on-line training. In fact, there are two major reasons for building incremental algorithms. The first is that, when the number of training images is very large, a batch algorithm may be unable to process the entire training set because of its computational or memory requirements. The second is that the learning algorithm may have to operate in a dynamic setting, where not all training data is given in advance: new training samples may arrive at any time and must be processed in an on-line manner. Through matricizations of a third-order tensor, we transform the eigenvalue decomposition problem of the scatter matrices into a singular value decomposition (SVD) of the corresponding unfolded matrices, and we analyze the complexity and memory requirements of the resulting algorithm. A theoretical guideline for selecting suitable dimensionality parameters without losing classification information is also presented in this paper. Experimental results on the FERET and CMU PIE (pose, illumination, and expression) databases show that the IBDPCA algorithm gives a close approximation to the BDPCA method while using less time.
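
The key identity behind the matricization step can be illustrated with a minimal NumPy sketch. This is our own illustration, not the paper's implementation: stacking K images into a third-order tensor, unfolding it, and checking that the scatter matrix (a sum of K image covariance terms) equals the unfolded matrix times its transpose, so its eigen-decomposition reduces to an SVD of the unfolding.

```python
import numpy as np

# Hypothetical sizes for illustration: K images of size m x n
# stacked into a third-order tensor of shape (K, m, n).
rng = np.random.default_rng(0)
K, m, n = 5, 4, 3
tensor = rng.standard_normal((K, m, n))

# Unfold the tensor: concatenate the K images side by side
# into an m x (K*n) matrix.
X = np.concatenate([tensor[k] for k in range(K)], axis=1)

# Row scatter matrix as a sum of K image covariance terms:
# S = sum_k A_k A_k^T, which equals X X^T for the unfolding above.
S = sum(tensor[k] @ tensor[k].T for k in range(K))
assert np.allclose(S, X @ X.T)

# Eigen-decomposition of S is therefore equivalent to an SVD of X:
# the eigenvalues of S are the squared singular values of X, and
# the eigenvectors of S are the left singular vectors of X.
U, s, _ = np.linalg.svd(X, full_matrices=False)
eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]  # descending order
assert np.allclose(eigvals, s**2)
```

Working with the SVD of the unfolded matrix, rather than the eigen-decomposition of the accumulated scatter, is what opens the door to incremental (on-line) updating as new samples arrive.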