Mainstream semi-supervised learning (SSL) techniques, such as pseudo-labeling and contrastive learning, exhibit strong generalization but lack a solid theoretical grounding. Moreover, pseudo-labeling does not exploit the label information available from high-quality neighbors, while contrastive learning ignores the supervisory signal carried by ground-truth labels. To address these issues, we first introduce a generalized bias-variance decomposition framework to analyze both techniques. This analysis then motivates two refinements: neighbor-enhanced pseudo-labeling, which strengthens confidence-based pseudo-labels with aggregated predictions from high-quality neighbors, and label-enhanced contrastive learning, which improves feature representations by combining enhanced pseudo-labels with ground-truth labels to construct a reliable and complete symmetric adjacency graph. Finally, we combine these two techniques into a new SSL method, GBVSSL. GBVSSL significantly surpasses previous state-of-the-art SSL approaches on standard benchmarks such as CIFAR-10/100, SVHN, and STL-10. On CIFAR-100 with 400, 2,500, and 10,000 labeled samples, GBVSSL outperforms FlexMatch by 3.46%, 2.72%, and 2.89%, respectively; on the real-world Semi-iNat 2021 dataset, it improves Top-1 accuracy over CCSSL by 4.38%. GBVSSL also converges faster and performs better on class-imbalanced SSL. Extensive ablation and qualitative studies demonstrate the effectiveness and contribution of each component of GBVSSL.
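To make the neighbor-enhanced pseudo-labeling idea concrete, the PyTorch sketch below blends each unlabeled sample's own softmax prediction with a similarity-weighted aggregate of predictions from its nearest feature-space neighbors, then thresholds the result. The memory bank, the neighbor count k, the interpolation weight alpha, and the confidence threshold tau are illustrative assumptions for this sketch, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def neighbor_enhanced_pseudo_labels(probs, feats, bank_feats, bank_probs,
                                    k=10, alpha=0.5, tau=0.95):
    """Blend each sample's prediction with its neighbors' predictions.

    probs:      (B, C) softmax outputs for the current unlabeled batch
    feats:      (B, D) L2-normalized features for the batch
    bank_feats: (N, D) L2-normalized features in a memory bank
    bank_probs: (N, C) stored predictions for the memory bank
    Returns hard pseudo-labels and a boolean confidence mask.
    """
    # Cosine similarity between batch features and the memory bank.
    sim = feats @ bank_feats.t()                        # (B, N)
    # Indices of the k most similar (highest-quality) neighbors.
    topk_sim, topk_idx = sim.topk(k, dim=1)             # (B, k)
    # Similarity-weighted aggregation of the neighbors' predictions.
    weights = F.softmax(topk_sim, dim=1).unsqueeze(-1)  # (B, k, 1)
    neighbor_probs = (bank_probs[topk_idx] * weights).sum(dim=1)  # (B, C)
    # Interpolate the model's own prediction with the neighbor consensus.
    enhanced = alpha * probs + (1.0 - alpha) * neighbor_probs
    # Keep only pseudo-labels whose enhanced confidence clears the threshold.
    conf, pseudo = enhanced.max(dim=1)
    mask = conf.ge(tau)
    return pseudo, mask
```

Interpolating rather than replacing the model's prediction keeps a sample's pseudo-label stable when its neighborhood is noisy; the exact weighting scheme is a design choice assumed here.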
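Likewise, a minimal sketch of how the symmetric adjacency graph for label-enhanced contrastive learning might be assembled: two samples are connected when they share a ground-truth label or a trusted enhanced pseudo-label. The convention of marking unlabeled samples with -1 and the fallback ordering (ground truth first, then confident pseudo-labels) are our assumptions, not the paper's specification.

```python
import torch

def label_enhanced_adjacency(labels, pseudo, mask):
    """Build a symmetric adjacency graph from ground-truth and pseudo-labels.

    labels: (B,) ground-truth class ids for labeled samples, -1 for unlabeled
    pseudo: (B,) enhanced pseudo-labels
    mask:   (B,) bool, True where the pseudo-label is trusted
    Returns a (B, B) float adjacency matrix: 1 where two samples agree.
    """
    # Prefer ground-truth labels; fall back to confident pseudo-labels.
    merged = torch.where(labels >= 0, labels, pseudo)
    valid = (labels >= 0) | mask
    # Connect pairs that share a label and are both valid; result is symmetric.
    agree = merged.unsqueeze(0) == merged.unsqueeze(1)
    adj = agree & valid.unsqueeze(0) & valid.unsqueeze(1)
    return adj.float()
```

Such a matrix could then select positive pairs in a supervised-contrastive loss, with invalid rows and columns simply contributing no positives.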