Abstract
Label Enhancement (LE) aims to convert the logical labels of instances into label distributions, providing data preparation for label distribution learning (LDL). Existing LE methods typically fail to treat original features and logical labels as two complementary descriptive views of instances, and thus cannot extract the implicit information shared across views, leaving the feature and logical-label information of the instances underexploited. To address this issue, we propose a novel method named Dual Contrastive Label Enhancement (DCLE). DCLE regards original features and logical labels as two view-specific descriptions and encodes them into a unified projection space. A dual contrastive learning strategy, applied at both the instance level and the class level, mines cross-view consensus information and distinguishes instance representations by exploring the inherent correlations among features, thereby generating high-level representations of the instances. To recover label distributions from these high-level representations, we design a distance-minimizing, margin-penalized training strategy that preserves the consistency of label attributes. Extensive experiments on 13 LDL benchmark datasets validate the efficacy and competitiveness of DCLE.
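The dual (instance-level plus class-level) contrastive objective described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the NT-Xent-style loss, the function names, the temperature value, and the shapes of the embeddings (`n x d`) and soft class assignments (`n x c`) are all illustrative assumptions.

```python
import numpy as np

def contrastive_loss(a, b, temperature=0.5):
    """NT-Xent-style cross-view loss: row i of `a` and row i of `b` form
    the positive pair; all other rows of `b` act as negatives."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)  # L2-normalize rows
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                    # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)         # softmax over candidates
    idx = np.arange(len(a))
    return -np.mean(np.log(probs[idx, idx]))          # -log p(positive pair)

def dual_contrastive_loss(z_feat, z_label, q_feat, q_label, tau=0.5):
    """Instance-level term on per-instance embeddings (n x d) from the
    feature view and the label view, plus a class-level term that
    contrasts the columns (class prototypes) of soft assignment
    matrices (n x c) across the two views."""
    instance_term = contrastive_loss(z_feat, z_label, tau)
    class_term = contrastive_loss(q_feat.T, q_label.T, tau)
    return instance_term + class_term
```

Minimizing the instance-level term pulls the two view-specific encodings of the same instance together, while the class-level term aligns how each class is represented across the feature and label views.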