Abstract

Deep subspace clustering methods have achieved impressive clustering performance compared with other clustering algorithms. However, most existing methods suffer from the following problems: 1) they consider only global features and neglect local features in subspace self-expressiveness learning; 2) they neglect the discriminative information carried by each self-expressiveness coefficient matrix; 3) they ignore useful long-range dependencies and positional information in feature representation learning. To solve these problems, in this paper we propose a novel multi-scale deep subspace clustering method with discriminative learning (MDSCDL) to obtain a high-quality self-expressiveness coefficient matrix. Specifically, MDSCDL bridges multiple fully connected layers between the encoder and decoder to learn multi-scale self-expressiveness coefficient matrices from both global and local features, representing the relationships among data more comprehensively. By modeling the interdependencies of the multi-scale self-expressiveness coefficient matrices, MDSCDL adaptively assigns a discriminative weight to each matrix and fuses them with a convolution operation. Moreover, to increase representation power, MDSCDL introduces a coordinate attention mechanism to extract long-range dependencies and positional features for subspace self-expressiveness learning. Extensive experiments on face and object datasets have shown the superiority of MDSCDL compared with several state-of-the-art methods.
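To make the self-expressiveness idea underlying the abstract concrete, the sketch below solves the classical ridge-regularized self-expression problem min_C ||Z - CZ||² + λ||C||² in closed form on toy data; this is a minimal illustration only, since MDSCDL instead learns multi-scale coefficient matrices with fully connected layers inside an autoencoder. The function name, the regularization weight `lam`, and the toy subspaces are all hypothetical.

```python
import numpy as np

def self_expression(Z, lam=1e-2):
    # Closed-form ridge solution of min_C ||Z - C Z||_F^2 + lam * ||C||_F^2,
    # where each row of Z is one sample's feature vector.
    # (Illustrative only; MDSCDL learns C via network layers.)
    G = Z @ Z.T
    return G @ np.linalg.inv(G + lam * np.eye(len(Z)))

# Toy data: points drawn from two orthogonal 1-D subspaces of R^3.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 1)) @ np.array([[1.0, 0.0, 0.0]])  # subspace 1
B = rng.normal(size=(5, 1)) @ np.array([[0.0, 1.0, 0.0]])  # subspace 2
Z = np.vstack([A, B])

C = self_expression(Z)
# Symmetrized affinity matrix, as typically fed to spectral clustering.
W = np.abs(C) + np.abs(C.T)
```

Because the two toy subspaces are orthogonal, the recovered coefficient matrix is block-diagonal: each sample is reconstructed only from samples in its own subspace, which is exactly the structure spectral clustering exploits.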
