Recent methods introduce semantics obtained from pre-trained classification models into color constancy to guide the model in learning object-color mappings, thereby improving illumination estimation. However, the task discrepancy between classification and color constancy creates a representation learning gap, causing semantic loss during training. Moreover, semantic-based methods emphasize semantic regions, so clues contained in non-semantic regions are neglected. Overall, semantic-based methods underutilize critical information. To address this problem, we propose a Semantic Preserving Network (SPNet). We first design a Semantic Constraint Module (SCM) in SPNet. SCM imposes semantic constraints that keep semantic knowledge transfer uninterrupted during training, preventing semantic loss. We further propose an Auxiliary Calibration Module (ACM). ACM exploits background regions whose innate colors span a restricted range; the high color consistency of these regions helps calibrate the illumination color. Meanwhile, ACM recalibrates local–global consistency to avoid large estimation bias in other background regions where illumination clues are insufficient. Extensive experiments on benchmark datasets show that our method achieves superior performance without extra inference cost.
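To make the semantic-constraint idea concrete, the sketch below shows one plausible way such a constraint could be applied during training: features from the color constancy backbone are aligned with those of a frozen pre-trained classifier so that semantic knowledge is retained. The class name `SemanticConstraint`, the ResNet-18 teacher, the projection layer, and the cosine-similarity objective are all illustrative assumptions for exposition, not the paper's actual SCM implementation.

```python
# Illustrative sketch only: one plausible form of a semantic constraint for
# color constancy training, NOT the paper's actual SCM implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class SemanticConstraint(nn.Module):
    """Hypothetical semantic constraint: align backbone features with those of
    a frozen pre-trained classifier so semantic knowledge is not lost."""

    def __init__(self, student_dim: int, teacher_dim: int = 512):
        super().__init__()
        # Frozen teacher: an ImageNet-pretrained classifier truncated to a feature extractor.
        teacher = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.teacher = nn.Sequential(*list(teacher.children())[:-1]).eval()
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        # Project student (color-constancy backbone) features into the teacher's feature space.
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, image: torch.Tensor, student_feat: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            teacher_feat = self.teacher(image).flatten(1)      # (B, 512)
        student_feat = self.proj(student_feat.flatten(1))      # (B, 512)
        # Cosine-similarity alignment keeps the semantic structure of the features.
        return 1.0 - F.cosine_similarity(student_feat, teacher_feat, dim=1).mean()


# Usage sketch: the constraint would be added to the main illumination-estimation loss,
# e.g. loss = angular_error(pred_illum, gt_illum) + lambda_sem * scm(image, backbone_feat)
```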