Label distribution learning represents the relevance of each label to a sample with a description degree, which provides richer semantic information and has therefore found increasingly wide application. Exploiting label correlations is an effective way to narrow the hypothesis space of label distribution learning models. Existing works that mine correlations through low-rank assumptions or linear label dependence assume that each label can be linearly expressed by the other labels. This assumption holds only when linear dependency relationships actually exist among the labels, so the label correlations obtained by such methods are subject to a degree of distortion. To address this issue, this paper instead assumes that all labels can be linearly represented by the same set of bases, and correlation between labels is captured by their sharing of these common bases. Specifically, matrix factorization is employed to extract a set of bases capable of representing all labels, and a label distribution learning algorithm is then designed based on the property that the ground-truth and predicted label distributions share the same set of bases. The effectiveness of the algorithm is verified experimentally: the proposed algorithm achieves the best performance in 73.15% of the cases and obtains the best average ranking, and in the two-tailed t-test it shows statistically significant superiority over all comparison algorithms.
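One possible formalization of the shared-basis idea (the notation below is illustrative and not taken from the paper): write the ground-truth label distribution matrix as $D \in \mathbb{R}^{n \times c}$ for $n$ samples and $c$ labels, so that each column of $D$ corresponds to one label. The assumption that every label is a linear combination of a common set of bases can then be sketched as
$$D \approx B W, \qquad \hat{D} \approx B \hat{W},$$
where the $k$ columns of $B \in \mathbb{R}^{n \times k}$ are the shared bases extracted by matrix factorization, $W, \hat{W} \in \mathbb{R}^{k \times c}$ are coefficient matrices, and the predicted distribution $\hat{D}$ is constrained to use the same basis $B$ as the ground truth. Under this sketch, label correlations are encoded through the common basis rather than through direct linear dependence of one label on another.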