Multi-view learning (MVL) is a promising direction, and most MVL methods work under the assumption that data are complete in all views. However, this assumption is often violated in practice due to high collection costs, equipment failures, and other practical difficulties. Moreover, the volume of data with missing views is often enormous. Therefore, fully leveraging large-scale incomplete-view data is challenging yet valuable. Inspired by the reduced support vector machine (RSVM) and the multi-view privileged support vector machine (PSVM-2V), this paper proposes an efficient kernel method called reduced PSVM-2V (RPSVM-2V). It not only provides a novel way to handle incomplete-view data, but can also be adapted to efficiently address large-scale complete-view learning problems. In addition, the idea of replacing the full kernel with a smaller rectangular reduced kernel is extended to develop two further MVL methods, namely reduced SVM-2K (RSVM-2K) and reduced multiple kernel learning (RMKL). Furthermore, we analyze the generalization error bounds of RPSVM-2V and the two extensions using Rademacher complexity. Comprehensive experiments demonstrate that the proposed models achieve comparable performance at lower time and memory cost. Spectral analysis further verifies the effectiveness of the reduced kernel used in our models.
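To make the reduced-kernel idea concrete, the following minimal NumPy sketch illustrates the general RSVM-style construction: a random subset of the training points is used as kernel columns, so a rectangular kernel K(A, Ā) of size m × m̃ replaces the full m × m Gram matrix. This is an illustrative sketch, not the paper's implementation; the RBF kernel choice, the 10% sampling ratio, and the function names are assumptions made here for demonstration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of X and Y,
    # mapped through the Gaussian (RBF) kernel.
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq)

def reduced_kernel(X, subset_ratio=0.1, gamma=1.0, seed=None):
    """Rectangular reduced kernel K(A, A_bar) in the spirit of RSVM:
    A_bar is a random subset of the training rows, so the result is
    m x m_tilde rather than m x m, cutting time and memory cost."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    m_tilde = max(1, int(subset_ratio * m))
    idx = rng.choice(m, size=m_tilde, replace=False)
    return rbf_kernel(X, X[idx], gamma=gamma), idx

# Example: 1000 points in R^20 give a 1000 x 100 reduced kernel
# instead of a 1000 x 1000 full Gram matrix.
X = np.random.default_rng(0).standard_normal((1000, 20))
K_red, idx = reduced_kernel(X, subset_ratio=0.1, gamma=0.5, seed=0)
print(K_red.shape)  # (1000, 100)
```

Under this construction, the optimization in each view can be carried out over the m̃ selected support candidates only, which is the source of the efficiency gains the abstract refers to.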