Recent studies show that the predictive performance of graph neural networks (GNNs) is inconsistent across experimental runs, even under identical parameter settings. This prediction variability limits the applicability of GNNs, and its underlying causes remain unclear. We identify a key contributing factor: the predicted classes of some nodes oscillate during GNN training. To address this problem, we propose a novel framework, the Graph Relearn Network (GRN), designed to reduce prediction variance by iteratively refining the predictions of these unstable nodes. GRN operates in two phases: pre-predict and relearn. In the pre-predict phase, a graph-dense encoder is trained to produce preliminary predictions of the node categories. In the relearn phase, the model focuses intensively on the unstable nodes to refine their predictions. Extensive experiments on ten graph datasets demonstrate that GRN significantly improves the performance stability of GNNs (reducing the standard deviation by up to 75%) and achieves state-of-the-art prediction accuracy (improvements of up to 11.97%). By mitigating the instability introduced by unstable nodes, GRN improves both the stability and the accuracy of GNNs in node classification tasks.
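The oscillation phenomenon described above can be illustrated with a minimal sketch: track each node's predicted class over training epochs and flag nodes whose prediction flips frequently. The function name, the flip-count criterion, and the threshold below are illustrative assumptions, not the paper's exact definition of instability.

```python
import numpy as np

def flag_unstable_nodes(pred_history, threshold=1):
    """Flag nodes whose predicted class oscillates during training.

    pred_history: (num_epochs, num_nodes) array of predicted class ids,
    one row per training epoch. A node is considered 'unstable' here if
    its prediction changes more than `threshold` times between epochs
    (an illustrative criterion, not the paper's formal definition).
    """
    # Count, per node, how often the prediction differs between
    # consecutive epochs.
    flips = (pred_history[1:] != pred_history[:-1]).sum(axis=0)
    return np.where(flips > threshold)[0]

# Toy example: 4 epochs, 3 nodes; node 1 oscillates between classes 0 and 2.
history = np.array([
    [0, 0, 1],
    [0, 2, 1],
    [0, 0, 1],
    [0, 2, 1],
])
print(flag_unstable_nodes(history))  # node 1 flips 3 times -> [1]
```

In a GRN-style pipeline, the relearn phase would then concentrate additional training effort on the flagged nodes rather than on the whole graph.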