INTRODUCTION: Cervical cancer is the most common malignant tumor of the female reproductive system, and deaths from cervical cancer in developing countries account for 80% of the global total. In China, the incidence of cervical cancer is rising year by year. Commonly used screening methods include TCT, HPV testing, combined TCT+HPV testing, FRD, and visual inspection with acetic acid/Lugol's iodine (VIA/VILI). Although combined TCT+HPV testing offers high sensitivity and specificity, it is costly and time-consuming. VIA/VILI screening is inexpensive, easy to perform, and well suited for promotion in economically underdeveloped areas. However, VIA/VILI relies on the subjective judgment of physicians, so its accuracy is relatively low in rural areas of China, where the population is large and well-trained doctors are scarce. Computer-aided diagnosis (CAD) technology is therefore needed to improve the accuracy and reliability of VIA/VILI screening.

OBJECTIVES: Artificial intelligence (AI)-based Visual Inspection with Acetic acid (VIA) screening with computer-aided diagnosis has the potential to significantly reduce the cost and increase the coverage of cervical cancer screening, thereby reducing the incidence of the disease. To this end, we developed an AI preprocessing algorithm aimed at improving the accuracy of AI-based cervical cancer detection.

METHODS: The algorithm first maps images into the YCrCb and Lab color spaces. Unlike traditional enhancement methods, which mainly operate on the luminance channel, our method primarily enhances the Cr channel of the YCrCb color space and the a channel of the Lab color space. We propose a novel LT_CLAHE algorithm to enhance the Cr channel, yielding an enhanced image biased toward blue-green tones, and use the WLS algorithm to enhance the a channel, yielding an enhanced image biased toward red tones. The enhanced images from the two color spaces are then fused to eliminate color distortion.

RESULTS: Experimental results show that our method significantly enhances lesion texture and outperforms traditional methods across various objective indicators. When the enhanced images are used as input to neural networks, detection accuracy also increases significantly.
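The following is a minimal sketch of the two-branch color-space enhancement pipeline outlined in METHODS, using OpenCV and NumPy. The abstract does not specify the internals of LT_CLAHE or the WLS-based enhancement, so standard CLAHE and an unsharp-mask step are used here purely as stand-ins; the function name `enhance_via_image`, the input path, and the equal fusion weights are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch of the two-branch enhancement pipeline (not the paper's code).
# Assumptions: standard CLAHE stands in for LT_CLAHE, an unsharp mask stands in
# for the WLS-based enhancement, and fusion uses equal weights.
import cv2
import numpy as np


def enhance_via_image(bgr: np.ndarray) -> np.ndarray:
    """Enhance a VIA cervical image via YCrCb (Cr) and Lab (a) branches, then fuse."""
    # Branch 1: YCrCb -- enhance the Cr channel (CLAHE as a stand-in for LT_CLAHE).
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    cr_enh = clahe.apply(cr)
    branch1 = cv2.cvtColor(cv2.merge((y, cr_enh, cb)), cv2.COLOR_YCrCb2BGR)

    # Branch 2: Lab -- enhance the a channel (unsharp mask as a placeholder
    # for the paper's WLS-based enhancement).
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)
    l_ch, a_ch, b_ch = cv2.split(lab)
    blurred = cv2.GaussianBlur(a_ch, (0, 0), sigmaX=3)
    a_enh = cv2.addWeighted(a_ch, 1.5, blurred, -0.5, 0)
    branch2 = cv2.cvtColor(cv2.merge((l_ch, a_enh, b_ch)), cv2.COLOR_Lab2BGR)

    # Fuse the two branches to offset their opposing color casts
    # (blue-green vs. red); equal weighting is an assumption.
    return cv2.addWeighted(branch1, 0.5, branch2, 0.5, 0)


if __name__ == "__main__":
    img = cv2.imread("via_sample.jpg")  # hypothetical input image
    if img is not None:
        cv2.imwrite("via_sample_enhanced.jpg", enhance_via_image(img))
```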