Abstract

Latent semantic analysis (LSA) is a mathematical/statistical technique for discovering the hidden concepts that relate terms and documents within a document collection (i.e., a large corpus of text). Each document and each term of the corpus is expressed as a vector whose elements correspond to these concepts, forming a term-document matrix. LSA then computes a low-rank approximation of the term-document matrix in order to remove irrelevant information, extract the most important relations, and reduce computational time. The irrelevant information, called "noise," has no noteworthy effect on the meaning of the document collection, and removing it is an essential step in LSA. The singular value decomposition (SVD) has been the main tool for obtaining this low-rank approximation. Since the document collection is dynamic (i.e., the term-document matrix is subject to repeated updates), the approximation must be renewed, either by recomputing the SVD or by updating it. However, the computational cost of recomputing or updating the SVD of the term-document matrix is very high when new terms and/or documents are added to a preexisting collection. This issue has opened the door to other matrix decompositions for LSA, such as ULV- and URV-based decompositions. This study shows that the truncated ULV decomposition (TULVD) is a good alternative to the SVD in LSA modeling.
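The pipeline the abstract describes (build a term-document matrix, then keep only the dominant concepts via a low-rank approximation) can be sketched in a few lines. The toy corpus, the vocabulary construction, and the choice of rank k = 2 below are illustrative assumptions, not data from the study:

```python
import numpy as np

# Hypothetical toy corpus; each document is a short string of terms.
docs = [
    "human machine interface",
    "user interface system",
    "graph tree minors",
    "graph minors survey",
]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document matrix: rows are terms, columns are documents,
# entries are raw term counts.
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Rank-k approximation via the SVD: keep the k largest singular values
# (the "concepts") and discard the rest as noise.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k]
```

By the Eckart–Young theorem, A_k is the best rank-k approximation of A in both the spectral and Frobenius norms, and the spectral-norm error equals the (k+1)-th singular value; this is exactly the approximation that the TULVD is proposed to replace at lower update cost.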

Highlights

  • Academic Editor: Danilo Pianini

  • The latent semantic analysis (LSA) is a mathematical/statistical way of discovering hidden concepts between terms and documents or within a document collection

  • The computational time of recomputing or updating the singular value decomposition (SVD) of the term-document matrix is very high when new terms and/or documents are added to a preexisting document collection. This issue has opened the door to other matrix decompositions for LSA, such as ULV- and URV-based decompositions. This study shows that the truncated ULV decomposition (TULVD) is a good alternative to the SVD in LSA modeling

  • In LSA, where document collections are dynamic over time (i.e., the term-document matrix is subject to repeated updates), the SVD becomes prohibitive due to its high computational expense. Thus, alternative decompositions have been proposed for these applications, such as low-rank ULV/URV decompositions [7] and the truncated ULV decomposition (TULVD)
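To make the alternative concrete, here is one standard way to build a ULV decomposition A = U L Vᵀ (L lower triangular, U and V with orthonormal columns) from two QR factorizations. This is a generic textbook construction, not the authors' TULVD algorithm, and the function name `ulv` is our own:

```python
import numpy as np
from scipy.linalg import qr

def ulv(A):
    """Generic ULV sketch: A = U @ L @ V.T with L lower triangular.

    Two-step construction (not the paper's TULVD algorithm):
    rank-revealing QR with column pivoting, then a QR of R1.T.
    """
    # Step 1: column-pivoted QR, A[:, piv] = Q1 @ R1 (R1 upper triangular).
    Q1, R1, piv = qr(A, mode='economic', pivoting=True)
    # Step 2: QR of R1.T gives R1 = R2.T @ Q2.T = L @ Q2.T,
    # where L = R2.T is lower triangular.
    Q2, R2 = qr(R1.T)
    L = R2.T
    # Undo the pivoting: A = Q1 @ L @ Q2.T @ P.T = U @ L @ V.T.
    P = np.eye(A.shape[1])[:, piv]
    return Q1, L, P @ Q2
```

For a numerically rank-deficient A, the trailing rows and columns of L carry negligible weight and can be dropped, which is the truncation idea behind the TULVD; updating such a factorization when rows or columns are appended is cheaper than recomputing an SVD.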


Summary

Notations and Background

Throughout the paper, uppercase letters such as A denote matrices. The n × n identity matrix is denoted by In. The norm ‖·‖ denotes the spectral norm, and ‖·‖F denotes the Frobenius norm. The notation Rm×n represents the set of m × n real matrices. An m × n matrix A is written as A = [aij], where aij is the entry of A in row i and column j, with 1 ≤ i ≤ m and 1 ≤ j ≤ n
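The two norms used throughout can be checked on a small concrete matrix; the example matrix below is our own illustration:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

spectral = np.linalg.norm(A, 2)       # spectral norm: largest singular value of A
frobenius = np.linalg.norm(A, 'fro')  # Frobenius norm: sqrt of the sum of squared entries
```

Here the singular values of A are √45 and √5, so the spectral norm is √45 while the Frobenius norm is √(45 + 5) = √50; in general ‖A‖ ≤ ‖A‖F, since the Frobenius norm sums all squared singular values.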

Orthogonal Matrix Decompositions
Application
Conclusion
Findings
Disclosure

