Abstract

Images captured of an illuminated scene often have their original colours contaminated by the illuminant. Colour constancy studies how to restore these colours. Substantial progress on colour constancy has been made in recent years owing to the development of convolutional neural networks (CNNs). In a CNN, high-level features carry semantic information while low-level features capture local details; taking both into account would help achieve a more accurate illuminant estimation. However, previous works have paid little attention to the latter, owing to the lack of a framework that can combine the two kinds of features. Inspired by the pyramid model, this work proposes a top-down network that successively propagates high-level information to low-level layers. This network, named top-down semantic aggregation for colour constancy (TDCC), takes full advantage of multi-scale representations with strong semantics. As a result, objects with intrinsic colours are better captured and a more accurate estimation is obtained. Experiments on three benchmark datasets demonstrate that TDCC significantly outperforms state-of-the-art colour constancy methods.
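As a concrete illustration of the top-down aggregation idea, below is a minimal PyTorch sketch of a pyramid-style pathway that injects high-level semantics into lower-level feature maps and regresses a unit-norm RGB illuminant. The module names, channel widths, fusion details, and the pooling head are illustrative assumptions based on the general pyramid design the abstract describes, not the paper's exact TDCC architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownAggregation(nn.Module):
    """Pyramid-style top-down pathway (assumed design): project each backbone
    stage to a common width, then merge deeper (more semantic) maps into
    shallower ones by upsampling and addition."""
    def __init__(self, in_channels, out_channels=256):
        super().__init__()
        # lateral 1x1 convs bring every stage to the same channel width
        self.laterals = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels]
        )
        # 3x3 convs smooth each merged map
        self.smooth = nn.ModuleList(
            [nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
             for _ in in_channels]
        )

    def forward(self, feats):
        # feats: backbone features ordered low-level -> high-level
        laterals = [lat(f) for lat, f in zip(self.laterals, feats)]
        merged = laterals[-1]               # start from the deepest map
        outs = [self.smooth[-1](merged)]
        for i in range(len(laterals) - 2, -1, -1):
            # upsample the semantic map and fuse it into the finer level
            merged = laterals[i] + F.interpolate(
                merged, size=laterals[i].shape[-2:], mode="nearest"
            )
            outs.insert(0, self.smooth[i](merged))
        return outs  # multi-scale maps, all enriched with high-level semantics

class IlluminantHead(nn.Module):
    """Pool one aggregated map and regress a unit-norm RGB illuminant
    (a common output convention for illuminant estimation)."""
    def __init__(self, channels=256):
        super().__init__()
        self.fc = nn.Linear(channels, 3)

    def forward(self, x):
        v = self.fc(F.adaptive_avg_pool2d(x, 1).flatten(1))
        return F.normalize(v, dim=1)  # illuminant colour as a unit vector

# toy backbone features at strides 4/8/16/32 (channels grow with depth)
feats = [torch.randn(1, c, s, s)
         for c, s in [(64, 56), (128, 28), (256, 14), (512, 7)]]
tdn = TopDownAggregation(in_channels=[64, 128, 256, 512])
head = IlluminantHead()
illuminant = head(tdn(feats)[0])  # estimate from the finest, semantics-enriched map
print(illuminant.shape)           # torch.Size([1, 3])
```

Lateral 1x1 convolutions with nearest-neighbour upsampling are the standard feature-pyramid merge; TDCC's actual fusion may differ, and this sketch only shows how high-level semantics can be propagated into the detail-rich low-level maps before estimating the illuminant.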
