Research on image classification sparked the latest deep-learning boom, and many downstream tasks, including semantic segmentation, have benefited from it. State-of-the-art semantic segmentation models are all based on deep learning, yet they still make semantic mistakes. In a dataset with a small number of categories, images are often collected from a single scene, and any two categories are closely related semantically. In a dataset collected from multiple scenes, however, two categories may be semantically unrelated. The probability that objects of one category appear next to objects of another category therefore differs across category pairs, and this observation is the basis of this paper. Semantic segmentation methods must solve two problems: localization and classification. This paper is dedicated to correcting classifications that clearly contradict reality. Specifically, we first calculate the relevancy between each pair of classes. Then, based on this knowledge, we infer the category of a connected component from its relationships with the surrounding connected components and correct the obviously wrong classifications made by a deep-learning semantic segmentation model. We evaluate several well-performing deep-learning models on two challenging public semantic image segmentation datasets. Our proposed method improves the performance of UPerNet, OCRNet and SETR from 40.7%, 43% and 48.64% to 42.07%, 44.09% and 49.09% mean IoU on the ADE20K validation set, and the performance of PSPNet, DeepLabV3 and OCRNet from 37.26%, 37.3% and 39.5% to 38.93%, 38.95% and 40.63% mean IoU on the COCO-Stuff dataset, which demonstrates the effectiveness of the method.
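To make the two-step idea concrete, the following is a minimal sketch, not the authors' released implementation: it estimates a class-pair relevancy matrix from adjacency counts in ground-truth masks, then relabels predicted connected components whose class is rarely adjacent to the classes surrounding them. The relevancy threshold, the "dominant neighbour" relabeling rule, and the helper names are illustrative assumptions rather than details given in the paper.

```python
import numpy as np
from scipy import ndimage


def class_pair_relevancy(masks, num_classes):
    """Estimate how often two classes appear adjacent in ground-truth masks."""
    rel = np.zeros((num_classes, num_classes), dtype=np.int64)
    for m in masks:
        # Compare each pixel with its right and bottom neighbour.
        for a, b in ((m[:, :-1], m[:, 1:]), (m[:-1, :], m[1:, :])):
            pairs = np.stack([a.ravel(), b.ravel()], axis=1)
            pairs = pairs[pairs[:, 0] != pairs[:, 1]]      # keep cross-class borders only
            np.add.at(rel, (pairs[:, 0], pairs[:, 1]), 1)
    rel = rel + rel.T                                      # make adjacency counts symmetric
    return rel / np.maximum(rel.sum(axis=1, keepdims=True), 1)


def correct_prediction(pred, relevancy, threshold=0.01):
    """Relabel components whose class is implausible next to its neighbours."""
    out = pred.copy()
    for cls in np.unique(pred):
        components, n = ndimage.label(pred == cls)
        for idx in range(1, n + 1):
            comp = components == idx
            ring = ndimage.binary_dilation(comp) & ~comp   # surrounding pixels
            neigh = pred[ring]
            if neigh.size == 0:
                continue
            # Mean relevancy between this component's class and its surroundings;
            # the threshold below is an assumed hyperparameter, not from the paper.
            if relevancy[cls, neigh].mean() < threshold:
                out[comp] = np.bincount(neigh).argmax()    # replace by dominant neighbour class
    return out
```

In this sketch the relevancy matrix would be computed once from the training annotations and the correction applied as a post-processing step on each predicted segmentation map.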