Abstract

Researchers have suggested leveraging label correlation to deal with the exponentially sized output space of label distribution learning (LDL). Among them, some have proposed to exploit local label correlation: they first partition the training set into groups and then exploit label correlation within each group. However, these works usually apply clustering algorithms, such as K-means, to split the training set, and the resulting clusters are independent of label correlation. The structures (e.g., low rank and manifold) learned on such clusters may not effectively capture label correlation. To solve this problem, we put forward a novel LDL method called LDL by partitioning the label distribution manifold (LDL-PLDM). First, it jointly bipartitions the training set and learns the label distribution manifold to model label correlation. Second, it recurses until the reconstruction error of the learned label distribution manifold can no longer be reduced. LDL-PLDM thus yields label-correlation-aware partitions, on which the learned label distribution manifold can better capture label correlation. We conduct extensive experiments to show that LDL-PLDM statistically outperforms state-of-the-art LDL methods.
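The recursive scheme described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it uses a rank-1 SVD reconstruction of the label distribution matrix as a simple stand-in for the manifold reconstruction error, and splits samples by the sign of their score on the first principal direction of the labels. Only the overall control flow (bipartition, then recurse while the summed reconstruction error of the two halves is lower than that of the parent) mirrors the abstract.

```python
import numpy as np

def recon_error(Y, k=1):
    # Rank-k reconstruction error of the label matrix Y -- a simple
    # stand-in for the manifold reconstruction error in LDL-PLDM.
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    Y_hat = (U[:, :k] * s[:k]) @ Vt[:k]
    return np.linalg.norm(Yc - Y_hat) ** 2

def bipartition(Y):
    # Hypothetical splitting rule: split samples by the sign of their
    # score on the first principal direction of the label distributions.
    Yc = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    return (Yc @ Vt[0]) >= 0

def partition(Y, idx=None, min_size=4):
    # Recursively bipartition; stop when the split no longer lowers the
    # summed reconstruction error (the stopping rule in the abstract).
    if idx is None:
        idx = np.arange(len(Y))
    if len(idx) < 2 * min_size:
        return [idx]
    mask = bipartition(Y[idx])
    left, right = idx[mask], idx[~mask]
    if len(left) < min_size or len(right) < min_size:
        return [idx]
    if recon_error(Y[left]) + recon_error(Y[right]) < recon_error(Y[idx]):
        return partition(Y, left, min_size) + partition(Y, right, min_size)
    return [idx]
```

Each leaf of the recursion is a group of samples on which a local structure (here, a low-rank approximation) fits the label distributions better than it does on the parent, which is the sense in which the partition is driven by label correlation rather than by feature-space clustering.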
