Abstract

Individuals with color vision deficiencies (CVDs) often face significant challenges in accessing vital information for decision-making. In response, we introduce EnColor—a deep Encoder-decoder Color corrector for images that enables individuals with CVDs to perceive content in its originally intended colorization. Our network architecture is designed to effectively capture the essential visual features needed to transform standard images into color-corrected versions. In particular, our training pipeline integrates a CVD simulator so as to ensure the fidelity of the output through the lens of individuals with impaired color vision. For evaluation, we focus primarily on tomato images, considering the profound impact of color vision deficiencies on practical domains such as agri-food systems. Our quantitative results demonstrate that the EnColor model achieves an improvement of over 16.8% in color retention compared with previously introduced algorithms, supporting our design choices. Furthermore, a survey of 43 participants provides subjective assessments in which our method received the highest scores. Additionally, specific visual examples are presented to highlight accurately restored colors. We also publicly release all code for EnColor as well as the baseline methods to ensure reproducibility and facilitate further studies in CVD correction.
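To illustrate the idea of integrating a CVD simulator into the training pipeline, the following is a minimal PyTorch-style sketch, not the actual EnColor implementation. The network, the L1 loss, and the fixed 3x3 protanopia-style matrix used as a differentiable simulator are all illustrative assumptions; EnColor's exact architecture, loss, and simulation model are described in the full paper.

```python
import torch
import torch.nn as nn

# Assumed stand-in for a differentiable CVD simulator: a fixed 3x3 linear
# transform in RGB space (values in the style of a protanopia matrix).
# The actual EnColor pipeline may use a different simulation model.
PROTAN_MATRIX = torch.tensor([
    [0.152286, 1.052583, -0.204868],
    [0.114503, 0.786281,  0.099216],
    [-0.003882, -0.048116, 1.051998],
])

def simulate_cvd(rgb: torch.Tensor) -> torch.Tensor:
    """Apply the 3x3 CVD matrix to an (N, 3, H, W) image batch."""
    return torch.einsum("ij,njhw->nihw", PROTAN_MATRIX.to(rgb), rgb)

class EncoderDecoder(nn.Module):
    """Minimal encoder-decoder color corrector (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def training_step(model, images, optimizer, loss_fn=nn.L1Loss()):
    """One step: the corrected output is passed through the CVD simulator,
    and the loss compares that simulated view against the original image,
    so that a CVD viewer perceives colors close to the intended ones."""
    corrected = model(images)            # color-corrected output
    perceived = simulate_cvd(corrected)  # how a CVD viewer would see it
    loss = loss_fn(perceived, images)    # match the intended appearance
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point this sketch reflects is that the simulator sits between the network output and the loss, so gradients encourage corrections whose simulated appearance matches the original image rather than corrections that merely look plausible to viewers with typical color vision.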
