Graph learning can effectively characterize the similarity structure of sample pairs, so multiple kernel clustering based on graph learning (MKC-GL) achieves promising results on nonlinear clustering tasks. However, previous methods are confined to a "three-stage" scheme, i.e., affinity graph learning, Laplacian construction, and clustering indicator extraction, which causes information distortion as the stages alternate. Meanwhile, the energy of Laplacian reconstruction and the necessary cluster information cannot be preserved simultaneously. To address these problems, we propose a one-stage shifted Laplacian refining (OSLR) method for multiple kernel clustering (MKC), whose "one-stage" scheme focuses on Laplacian learning rather than traditional graph learning. Concretely, our method treats each kernel matrix as an affinity graph rather than as ordinary data and constructs the corresponding Laplacian matrix in advance. Unlike traditional Laplacian-based methods, we transform each Laplacian into an approximately shifted Laplacian (ASL) to refine a consensus Laplacian. We then project the consensus Laplacian onto a Fantope space so that both the reconstruction information and the clustering information concentrate on the larger eigenvalues. Theoretically, OSLR reduces the memory complexity and computational complexity to O(n) and O(n²), respectively. Moreover, experimental results show that it outperforms state-of-the-art MKC methods on multiple benchmark datasets.
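To make the two central operations of the abstract concrete, the following is a minimal sketch, not the authors' implementation, assuming that the "shifted Laplacian" of a kernel/affinity matrix K refers to the standard form I + D^{-1/2} K D^{-1/2}, and that the Fantope projection is the usual projection onto {P : 0 ⪯ P ⪯ I, tr(P) = k} obtained by clipping eigenvalues under a trace constraint. The function names (`shifted_laplacian`, `fantope_projection`) are illustrative, not from the paper.

```python
# Hypothetical sketch of the two steps named in the abstract; the exact
# formulation used by OSLR may differ.
import numpy as np

def shifted_laplacian(K):
    """Treat a kernel matrix K as an affinity graph and build the shifted
    normalized Laplacian  L_s = I + D^{-1/2} K D^{-1/2}  (assumed form)."""
    d = K.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * K * d_inv_sqrt[None, :]
    return np.eye(K.shape[0]) + S

def fantope_projection(L, k, tol=1e-8):
    """Project a symmetric matrix L onto the Fantope
    {P : 0 <= P <= I, tr(P) = k} by shifting and clipping its eigenvalues
    to [0, 1]; the shift theta is found by bisection on the trace."""
    evals, evecs = np.linalg.eigh(L)
    lo, hi = evals.min() - 1.0, evals.max()   # sum >= k at lo, <= k at hi
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        if np.clip(evals - theta, 0.0, 1.0).sum() > k:
            lo = theta
        else:
            hi = theta
    clipped = np.clip(evals - 0.5 * (lo + hi), 0.0, 1.0)
    return (evecs * clipped) @ evecs.T

# Toy usage: average the shifted Laplacians of several kernels into a
# consensus and project it onto the Fantope with trace budget k.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 5))
    kernels = [np.exp(-np.sum((X[:, None] - X[None]) ** 2, -1) / (2 * s ** 2))
               for s in (0.5, 1.0, 2.0)]
    consensus = np.mean([shifted_laplacian(K) for K in kernels], axis=0)
    P = fantope_projection(consensus, k=3)
    print(np.trace(P))  # ~3: the trace budget lands on the largest eigenvalues
```

Under these assumptions, the Fantope constraint caps every eigenvalue at 1 and fixes the trace at the cluster number k, which is why the projection concentrates the retained spectral energy on the k largest eigenvalues, matching the behaviour the abstract attributes to the consensus Laplacian.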