Abstract

The growing use of multi-sensor technologies and the emergence of large data sets have created a demand for adaptable methods of representing high-dimensional, higher-order data with diverse features. Such multi-dimensional arrays, known as tensors, arise in a wide variety of applications. Standard data that describes an object from a single point of view lacks the semantic richness, utility, and complexity of multi-dimensional data. Because traditional clustering methods are unable to handle large datasets, research into multi-view clustering has taken off. This paper explores three representative multi-view clustering algorithms: Self-weighted Multiview Clustering (SwMC), Latent Multi-view Subspace Clustering (LMSC), and Multi-view Subspace Clustering with Intactness-Aware Similarity (MSC IAS). To evaluate their performance, we conduct in-depth experiments on seven real-world datasets and report three key metrics: accuracy (ACC), normalized mutual information (NMI), and purity. Furthermore, traditional Principal Component Analysis (PCA) cannot uncover hidden components within multi-dimensional data. To address this, tensor decomposition algorithms have been proposed that are flexible in the choice of constraints and extract more general latent components. We also review the main tensor decomposition methods, with an emphasis on the limitations of classical PCA that they address. Finally, the experiments presented here evaluate various tensor models on dimensionality reduction and supervised learning tasks.
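
As an illustration of how the three reported metrics can be computed, the minimal sketch below uses NumPy, SciPy, and scikit-learn. The label arrays and the Hungarian-matching accuracy helper are illustrative assumptions and are not taken from the paper's evaluation code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one matching of predicted clusters to true labels (Hungarian algorithm)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    # Contingency matrix: count[i, j] = samples with predicted cluster i and true label j
    count = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        count[p, t] += 1
    # Maximize the number of correctly matched samples
    row_ind, col_ind = linear_sum_assignment(-count)
    return count[row_ind, col_ind].sum() / y_true.size

def purity(y_true, y_pred):
    """Purity: each predicted cluster is credited with its majority true label."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    total = 0
    for c in np.unique(y_pred):
        total += np.bincount(y_true[y_pred == c]).max()
    return total / y_true.size

# Hypothetical ground-truth and predicted cluster labels
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])
print("ACC:", clustering_accuracy(y_true, y_pred))
print("NMI:", normalized_mutual_info_score(y_true, y_pred))
print("Purity:", purity(y_true, y_pred))
```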
