Abstract
As an effective way of finding the underlying parameters of a high-dimensional space, manifold learning is popular for nonlinear dimensionality reduction, which makes high-dimensional data easier to observe and analyze. In this paper, Isomap, one of the most famous manifold learning algorithms, is applied to the closing prices of stocks in the CSI 300 index from September 2009 to October 2011. The results indicate that the Isomap algorithm not only reduces the dimensionality of the stock data successfully, but also classifies most stocks efficiently according to their trends.
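A minimal sketch of this kind of pipeline is given below, assuming a price matrix with one row per stock and one column per trading day, and using scikit-learn's Isomap rather than the authors' own implementation; the array name and the n_neighbors / n_components values are illustrative assumptions, not taken from the paper.

# Minimal sketch (not the authors' code): Isomap on a matrix of closing
# prices, one row per stock and one column per trading day. The array name
# and the n_neighbors / n_components values are illustrative assumptions.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.preprocessing import StandardScaler

def embed_stock_prices(prices: np.ndarray,
                       n_neighbors: int = 10,
                       n_components: int = 2) -> np.ndarray:
    """Embed each stock's high-dimensional price history in a low-dimensional space."""
    # Standardize each trading-day column so absolute price levels do not dominate distances.
    X = StandardScaler().fit_transform(prices)
    # Isomap: build a k-nearest-neighbor graph, approximate geodesic distances
    # along the graph, then apply classical MDS to those distances.
    iso = Isomap(n_neighbors=n_neighbors, n_components=n_components)
    return iso.fit_transform(X)

Under this setup each stock is a single point in a space whose dimension equals the number of trading days, so the low-dimensional embedding places stocks with similar price trajectories close to one another.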
Highlights
Clustering analysis of stocks is necessary when investing in stocks. Yu and Wang [1] proposed an approach in which kernel principal component analysis is used to reduce the dimensionality of the data and k-means clustering is used to cluster the reduced data, so that stocks can be divided into different categories in terms of their financial information (see the sketch after this list)
As an effective way of finding the underlying parameters of a high-dimensional space, manifold learning is popular for nonlinear dimensionality reduction, which makes high-dimensional data easier to observe and analyze
Results indicate that the Isomap algorithm reduces the dimensionality of stock data successfully and classifies most stocks efficiently according to their trends
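A rough sketch of the kernel-PCA-plus-k-means pipeline attributed to Yu and Wang [1] follows; it is not their implementation, and the feature matrix of per-stock financial indicators, the RBF kernel, and the numbers of components and clusters are assumptions made for illustration.

# Rough sketch of a kernel-PCA + k-means pipeline of the kind attributed to
# Yu and Wang [1]; this is not their implementation. The feature matrix of
# per-stock financial indicators, the RBF kernel, and the numbers of
# components and clusters are illustrative assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

def cluster_stocks(features: np.ndarray,
                   n_components: int = 3,
                   n_clusters: int = 5) -> np.ndarray:
    """Reduce financial-indicator features with kernel PCA, then cluster with k-means."""
    X = StandardScaler().fit_transform(features)
    # Nonlinear dimensionality reduction via kernel PCA with an RBF kernel.
    reduced = KernelPCA(n_components=n_components, kernel="rbf").fit_transform(X)
    # Group stocks into categories in the reduced space.
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(reduced)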
Summary
Clustering analysis of stocks is necessary when investing in stocks. Yu and Wang [1] proposed an approach in which kernel principal component analysis is used to reduce the dimensionality of the data and k-means clustering is used to cluster the reduced data, so that stocks can be divided into different categories in terms of their financial information. Qin Qin et al. [5] use two different types of prediction models and multiple technical indicators to predict the Shanghai Composite Index returns and price volatility. These methods have been proved effective by experiments. Manifold learning is an important field of machine learning that has achieved very good results in exploring the inner laws of nonlinear data. It assumes that the data are evenly sampled from a low-dimensional manifold embedded in a high-dimensional Euclidean space. LLE, which was proposed by Roweis and Saul [7], can map high-dimensional input data to a global low-dimensional coordinate system while maintaining neighborhood relations and preserving the original topological structure of the data after dimensionality reduction. Some scholars have applied these dimensionality reduction methods in other fields and obtained many interesting results.
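As a minimal sketch of the LLE idea referenced above, the snippet below uses scikit-learn's LocallyLinearEmbedding rather than Roweis and Saul's original code; the input array and parameter values are illustrative assumptions.

# Minimal sketch of locally linear embedding (LLE) as proposed by Roweis and
# Saul [7], using scikit-learn's implementation rather than their original
# code. The input array and parameter values are illustrative assumptions.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def lle_embed(X: np.ndarray,
              n_neighbors: int = 12,
              n_components: int = 2) -> np.ndarray:
    """Map high-dimensional inputs to a global low-dimensional coordinate system
    while preserving local neighborhood relations."""
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                 n_components=n_components)
    return lle.fit_transform(X)

In LLE, each point is first reconstructed as a weighted combination of its nearest neighbors, and those same weights are then used to position the points in the low-dimensional space, which is how the neighborhood relations and local topology are preserved.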