Abstract

In this paper, we consider the symmetric matrix optimization problem arising in unsupervised feature selection. By relaxing the orthogonal constraint, this problem is transformed into a constrained symmetric nonnegative matrix optimization problem, and an efficient algorithm is designed to solve it. A convergence theorem for the new algorithm is established. Finally, numerical examples show that the new method is feasible. Notably, simulation experiments in unsupervised feature selection illustrate that our algorithm is more effective than existing algorithms.
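
The excerpt does not reproduce the paper's objective function or algorithm, so the following is only an illustrative sketch of the general idea it describes: relaxing an orthogonality constraint W^T W = I into a penalty while keeping entrywise nonnegativity. The objective Tr(W^T L W), the penalty weight gamma, the step size eta, and the function name are all assumptions made for illustration, not the authors' method.

import numpy as np

def relaxed_orthogonal_nmf(L, p, gamma=1.0, eta=1e-3, iters=500, seed=0):
    # Illustrative projected-gradient solver for
    #   min_W  Tr(W^T L W) + (gamma / 2) * ||W^T W - I_p||_F^2   s.t.  W >= 0,
    # where the hard constraint W^T W = I_p is relaxed into a quadratic penalty.
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    W = rng.random((n, p))                       # nonnegative starting point
    I_p = np.eye(p)
    for _ in range(iters):
        # Gradient: 2 L W from the trace term (L symmetric),
        # plus 2 * gamma * W (W^T W - I_p) from the penalty term.
        grad = 2.0 * (L @ W) + 2.0 * gamma * W @ (W.T @ W - I_p)
        W = np.maximum(W - eta * grad, 0.0)      # project back onto W >= 0
    return W

# Example: L is any symmetric matrix, e.g. a graph Laplacian built from the data.
L = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
W = relaxed_orthogonal_nmf(L, p=2)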

Highlights

  • The symbols Tr(B) and B^T stand for the trace and transpose of the matrix B, respectively

  • In Figures 3 and 4, we present the clustering accuracy and the normalized mutual information as the number of selected features changes

  • The symmetric matrix optimization problem arising in unsupervised feature selection is considered in this paper

Summary

Introduction

Throughout this paper, we use R^{m×n} to denote the set of m × n real matrices, and we write B ≥ 0 if the matrix B is nonnegative. The symbols Tr(B) and B^T stand for the trace and transpose of the matrix B, respectively. The symbol ‖α‖ stands for the l2-norm of the vector α, i.e., ‖α‖ = (α^T α)^{1/2}. The symbol ‖B‖_F stands for the Frobenius norm of the matrix B. The symbol I_p stands for the p × p identity matrix. For matrices A and B, A ∘ B denotes the Hadamard product of A and B. The symbol max{x, y} represents the greater of x and y.
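
To make this notation concrete, the following short NumPy snippet (an illustrative addition, not part of the paper) computes each of the quantities defined above:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [2.0, 5.0]])
alpha = np.array([3.0, 4.0])

np.trace(B)               # Tr(B) = 5.0
B.T                       # B^T, the transpose of B
np.linalg.norm(alpha)     # ||alpha|| = (alpha^T alpha)^(1/2) = 5.0
np.linalg.norm(B, 'fro')  # ||B||_F, the Frobenius norm of B
np.eye(4)                 # I_p with p = 4
A * B                     # A ∘ B, the Hadamard (entrywise) product
np.maximum(A, B)          # entrywise max{x, y}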
