Abstract

Color medical images introduce an additional confounding factor compared to conventional grayscale medical images: color variability. This variability can lead to inconsistent evaluation by clinicians and to misinterpretation or suboptimal learning in automatic quantitative algorithms. To mitigate the potential negative consequences of color variability, several color normalization strategies have been developed and have proven effective in standardizing image appearance. In this paper, we present a novel paradigm for color normalization using generative adversarial networks (GANs). Our method focuses on standardizing images in the fields of digital pathology (stain normalization) and dermatology (color constancy), where high color variability is consistently observed. Specifically, we formulate the color normalization task as an image-to-image translation problem, ensuring a pixel-to-pixel correspondence between the original and normalized images. Our approach outperforms existing state-of-the-art methods in both the digital pathology and dermatology fields. Extensive validation using public datasets demonstrates the effectiveness of our color normalization on entirely external test sets. Our framework exhibits strong generalization capability on unseen data, making it suitable for inclusion in the pipeline of automatic quantitative algorithms to reduce color variability and improve segmentation and/or classification performance. Lastly, we provide the source code of our models to encourage open science.
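To illustrate the image-to-image translation formulation, the sketch below shows a minimal pix2pix-style conditional GAN training step in PyTorch. It is an assumption-laden toy, not the paper's architecture: the tiny Generator and Discriminator networks, the loss weights, and the train_step helper are all illustrative; the L1 term stands in for the pixel-to-pixel correspondence described above.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping a source-colored image to a
    color-normalized image of the same spatial size (pixel-to-pixel)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic scoring (input, output) image pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # per-patch logits
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(src, tgt, lambda_l1=100.0):
    """One training step: src is the original image, tgt its
    color-normalized reference (both scaled to [-1, 1])."""
    # Discriminator update: real pairs vs. generated pairs.
    fake = G(src).detach()
    real_pred, fake_pred = D(src, tgt), D(src, fake)
    d_loss = bce(real_pred, torch.ones_like(real_pred)) + \
             bce(fake_pred, torch.zeros_like(fake_pred))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator while staying close to
    # the target pixel-wise (the L1 correspondence term).
    fake = G(src)
    pred = D(src, fake)
    g_loss = bce(pred, torch.ones_like(pred)) + lambda_l1 * l1(fake, tgt)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage with random stand-in images:
# train_step(torch.rand(1, 3, 64, 64) * 2 - 1, torch.rand(1, 3, 64, 64) * 2 - 1)

The paired (src, tgt) supervision is one common way to realize the pixel-to-pixel constraint; the paper's actual training setup and network design should be taken from its released source code.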
