Non-negative Matrix Factorization (NMF) is a popular technique in intelligent systems that decomposes a nonnegative matrix into two factor matrices: a basis matrix and a coefficient matrix. The main objective of NMF is to make the product of the two factor matrices approximate the original matrix as closely as possible. At the same time, the stability and generalization ability of the algorithm should be guaranteed. Therefore, the generalization performance of NMF algorithms is analyzed from the perspective of algorithmic stability, and generalization error bounds are derived; the resulting approach is named AS-NMF. First, a general NMF prediction algorithm is proposed that predicts labels for new samples, and the corresponding loss function is then defined. Second, the stability of the NMF algorithm is defined with respect to this loss function, and two generalization error bounds are obtained via uniform stability: one for the case where the basis matrix U is fixed, and one for the case where it is updated under the multiplicative update rule. The bounds show that the stability parameter depends on the upper bound of the norm of the input data, the dimension of the latent factor matrix, and the Frobenius norm of the basis matrix. Finally, a general and stable framework is established for analyzing and measuring generalization error bounds of NMF algorithms. Experimental results on three widely used benchmark datasets demonstrate the advantages of the new method, indicating that AS-NMF not only achieves efficient performance, but also outperforms state-of-the-art methods on recommendation tasks in terms of model stability.
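For context, the sketch below illustrates the standard multiplicative-update NMF that the bounds refer to, minimizing the Frobenius-norm objective ||X - UV||_F^2; it is not the paper's AS-NMF implementation, and the function name `nmf_multiplicative` and its parameters are illustrative assumptions.

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=200, eps=1e-9, seed=0):
    """Factorize a nonnegative matrix X (m x n) as X ~ U @ V, where
    U (m x r) is the basis matrix and V (r x n) the coefficient matrix,
    using the classical multiplicative update rules for ||X - U V||_F^2.
    (Illustrative sketch, not the AS-NMF algorithm from the paper.)"""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, r)) + eps
    V = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Multiplicative update for the coefficient matrix V
        V *= (U.T @ X) / (U.T @ U @ V + eps)
        # Multiplicative update for the basis matrix U
        U *= (X @ V.T) / (U @ V @ V.T + eps)
    return U, V

# Toy usage: factorize a small random nonnegative matrix with r = 5 latent factors
X = np.random.default_rng(1).random((20, 15))
U, V = nmf_multiplicative(X, r=5)
print("Reconstruction error:", np.linalg.norm(X - U @ V, "fro"))
```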