Deep neural network modeling for financial time series analysis
266
- 10.1016/j.energy.2020.119708
- Dec 24, 2020
- Energy
12
- 10.58496/bjai/2024/004
- Mar 2, 2024
- Babylonian Journal of Artificial Intelligence
1
- 10.20944/preprints202110.0049.v2
- Oct 12, 2021
637
- 10.1080/07474938.2010.481556
- Aug 30, 2010
- Econometric Reviews
100
- 10.1016/j.procs.2019.11.254
- Jan 1, 2019
- Procedia Computer Science
287
- 10.1016/j.neucom.2020.03.011
- Mar 12, 2020
- Neurocomputing
360
- 10.1016/j.knosys.2018.10.034
- Nov 22, 2018
- Knowledge-Based Systems
89
- 10.1016/j.jbankfin.2010.12.007
- Dec 21, 2010
- Journal of Banking & Finance
1768
- 10.1061/(asce)0733-947x(2003)129:6(664)
- Oct 15, 2003
- Journal of Transportation Engineering
22
- 10.3390/econometrics9040043
- Dec 4, 2021
- Econometrics
- Research Article
- 10.55041/ijsrem43678
- Apr 3, 2025
- International Journal of Scientific Research in Engineering and Management
Financial statement analysis is an essential process that helps businesses, investors, and other stakeholders assess an organization's financial health, stability, and overall performance. It involves a detailed examination of financial data, including income statements, balance sheets, and cash flow statements, to gain insights into a company’s profitability, liquidity, and solvency. Various analytical techniques, such as ratio analysis, trend analysis, and comparative financial analysis, are applied to interpret financial information effectively. Ratio analysis helps measure key financial aspects like profitability, liquidity, and financial leverage, while trend analysis enables the identification of financial patterns and forecasting future performance. Comparative analysis allows businesses to benchmark their financial position against competitors and industry standards, ensuring they stay competitive in the market. This study aims to explore the significance of financial statement analysis, its methodologies, and its impact on decision-making. Financial analysis provides valuable information that aids in strategic planning, investment decisions, risk management, and overall business sustainability. By thoroughly understanding the financial strengths and weaknesses of a company, stakeholders can make well-informed strategic choices, optimize resource allocation, and improve operational efficiency. Additionally, financial analysis plays a crucial role in assessing the creditworthiness of businesses, enabling banks and financial institutions to determine loan eligibility and lending risks. Investors rely on financial statement analysis to evaluate the potential risks and returns associated with investments, ensuring that they make sound financial decisions. 
While financial statement analysis offers numerous benefits, it also comes with challenges, such as data inaccuracy, financial statement manipulation, and market fluctuations, which can impact the reliability of the analysis. Financial models can also be complex, requiring expertise to interpret data accurately. The integration of advanced technologies, such as artificial intelligence and big data analytics, has improved the efficiency and accuracy of financial analysis, enabling businesses to make data-driven decisions with greater confidence. Furthermore, this study examines how financial analysis influences corporate sustainability and long-term business growth. Companies that implement strong financial analysis practices can better manage their cash flow, reduce financial risks, and enhance overall profitability. A well-structured financial analysis framework ensures transparency, regulatory compliance, and investor confidence, contributing to business stability. As financial analysis continues to evolve with technological advancements, businesses and investors must adopt modern analytical tools to enhance their decision-making capabilities.
Keywords: financial statement analysis, financial health, business performance, investors, stakeholders, income statement, balance sheet, cash flow statement, ratio analysis, trend analysis, comparative analysis, profitability, liquidity, solvency, strategic planning, investment decisions, risk management, business sustainability, resource allocation, operational efficiency, creditworthiness, financial institutions.
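Ratio analysis, as described above, reduces financial statements to a few comparable figures. A minimal sketch in Python, using made-up balance-sheet numbers (all figures are hypothetical, purely for illustration):

```python
def current_ratio(current_assets, current_liabilities):
    """Liquidity: ability to cover short-term obligations."""
    return current_assets / current_liabilities

def debt_to_equity(total_debt, shareholders_equity):
    """Leverage: reliance on borrowed funds."""
    return total_debt / shareholders_equity

def return_on_equity(net_income, shareholders_equity):
    """Profitability: earnings generated per unit of equity."""
    return net_income / shareholders_equity

# Invented figures for a hypothetical company
ratios = {
    "current_ratio": current_ratio(500_000, 250_000),      # 2.0
    "debt_to_equity": debt_to_equity(300_000, 600_000),    # 0.5
    "return_on_equity": return_on_equity(90_000, 600_000), # 0.15
}
```

Trend analysis would then track these ratios over successive reporting periods, and comparative analysis would benchmark them against industry peers.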
- Conference Article
10
- 10.2118/181803-ms
- Aug 24, 2016
The Bakken unconventional resource in the Williston Basin is becoming an important component of hydrocarbon supply in North America. Although advanced horizontal drilling and multi-stage stimulation techniques have been successful in exploiting the Bakken formations, appropriate completion and stimulation designs are crucial to maximizing well productivity and oil recovery in shale or tight reservoirs. Data-driven approaches, such as deep neural network modeling and global sensitivity analysis, are promising methods for evaluating hidden correlations between multi-stage hydraulic fracturing strategies and well production. In this study, a total of 2,919 wells in the Bakken formation, comprising 2,780 multi-stage hydraulically fractured horizontal wells and 139 vertical wells, were collected and analyzed using a deep learning method and Sobol' sensitivity analysis. Eighteen features of each horizontal well were extracted from the raw data sources, covering five aspects: well location and geometry, Bakken formation thickness, hydraulic fracture characterization, fracturing fluid, and proppant. Six-month and 18-month cumulative production volumes were used to characterize the post-stimulation performance of horizontal wells. In the deep learning model, one-hot encoding was used to handle the categorical data, and Xavier initialization, dropout, batch normalization, and the Adadelta optimizer were applied to develop a reliable deep neural network. Moreover, k-fold cross-validation was used to evaluate the prediction ability and robustness of the models. Finally, the importance of each parameter was studied using Sobol' sensitivity analysis. Deep neural networks were successfully developed, for the first time, to predict the oil production of hydraulically fractured horizontal wells. The overall performance of the deep network models is acceptable, with a low mean squared error between estimated and measured oil production.
The network configuration type has little effect on the performance of the deep network models, whereas the number of layers and the number of neurons per layer have a significant effect. The best number of layers ranges from 4 to 7, and the best number of neurons per layer is between 100 and 200. Sobol' sensitivity analysis indicates that the average proppant placed per stage is the most important parameter, contributing ~35% of the variability in both the 6-month and 18-month cumulative oil production. Moreover, interaction effects among parameters should be considered during hydraulic fracturing design. Results from this work can be used to optimize multi-stage hydraulic fracturing designs for new Bakken wells. The proposed deep learning method and sensitivity analysis provide a potential workflow for evaluating the post-stimulation performance of multi-stage fractured horizontal wells and can be integrated into reservoir decision-making routines.
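The Sobol' first-order index used in this abstract attributes shares of output variance to individual inputs. A minimal Monte Carlo sketch, with a toy linear model standing in for the trained production network (coefficients and the two-input setup are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for the trained model: input 0 dominates, playing
    # the role of "average proppant placed per stage".
    return 4.0 * x[:, 0] + 1.0 * x[:, 1]

n, d = 200_000, 2
A = rng.uniform(size=(n, d))   # two independent sample matrices
B = rng.uniform(size=(n, d))
y_a, y_b = model(A), model(B)

S = np.empty(d)
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]  # resample only input i
    # First-order index: Var(E[Y | X_i]) / Var(Y), Saltelli-style estimator
    S[i] = np.mean(y_b * (model(AB) - y_a)) / np.var(y_a)
```

For this linear model the indices are analytically 16/17 and 1/17, so the dominant input captures roughly 94% of the output variance; the paper's ~35% figure for proppant per stage is the same quantity estimated through its trained DNN.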
- Research Article
36
- 10.1016/j.jcp.2021.110782
- Oct 15, 2021
- Journal of Computational Physics
Deep neural network modeling of unknown partial differential equations in nodal space
- Research Article
- 10.1080/17499518.2024.2443457
- Dec 20, 2024
- Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards
Dynamic properties, such as shear modulus and damping ratio, are critical for civil engineering applications and essential for accurate dynamic response analysis. This study introduces a novel Deep Neural Network (DNN) approach to predict the normalized shear modulus (G/Gmax) and damping ratio (D) of granular soils over a wide strain range. Utilising a comprehensive dataset from cyclic triaxial (CT) and resonant column (RC) tests, we developed a Deep Feed-Forward Neural Network (DFFNN) model. The model incorporates grading characteristics, shear strain, void ratio, mean effective confining pressure, consolidation stress ratio, and specimen preparation method as inputs. The DFFNN demonstrated high accuracy, with testing scores of 0.9830 for G/Gmax and 0.9396 for D, outperforming traditional empirical models and other intelligent techniques such as the Shallow Neural Network (SNN), Support Vector Regression (SVR), and Gradient Boosting Regression (GBR). This data-driven approach offers a robust and adaptable method for predicting the dynamic properties of granular soils across diverse conditions.
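The 0.9830 and 0.9396 figures quoted above are goodness-of-fit scores on held-out data; assuming they are coefficients of determination, the metric can be sketched as follows (the measured/predicted values below are invented for illustration):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. predicted normalized shear moduli
y_true = np.array([0.90, 0.80, 0.60, 0.40, 0.20])
y_pred = np.array([0.88, 0.79, 0.63, 0.41, 0.22])
score = r2_score(y_true, y_pred)
```

A score of 1.0 means perfect prediction; 0.0 means the model does no better than predicting the mean.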
- Research Article
- 10.5075/epfl-thesis-4688
- Jan 1, 2010
Time series modeling and analysis is central to most financial and econometric data modeling. With increased globalization in trade, commerce, and finance, national variables like gross domestic product (GDP) and the unemployment rate, market variables like indices and stock prices, and global variables like commodity prices are more tightly coupled than ever before. This translates into the use of multivariate or vector time series models and algorithms for analyzing and understanding the relationships these variables share with each other. Autocorrelation is one of the fundamental aspects of time series modeling. However, traditional linear models, which arise from the strong autocorrelation observed in many financial and econometric time series, are at times unable to capture the nonlinear relationships that characterize many such series. This necessitates the study of nonlinear models for analyzing such time series. The class of bilinear models is one of the simplest classes of nonlinear models. These models are able to capture the temporary erratic fluctuations that are common in many financial return series and are thus of tremendous interest in financial time series analysis. Another aspect of time series analysis is homoscedasticity versus heteroscedasticity. Many time series, even after differencing, exhibit heteroscedasticity, so it becomes important to incorporate this feature in the associated models. The class of autoregressive conditional heteroscedastic (ARCH) models and its variants forms the primary backbone of conditionally heteroscedastic time series models. Robustness is a highly underrated feature of most time series applications and models presently in use in industry. With an ever-increasing amount of information available for modeling, it is not uncommon for the data to contain aberrations such as level shifts and occasional large fluctuations.
Conventional methods like maximum likelihood and least squares are well known to be highly sensitive to such contamination. Hence, it becomes important to use robust methods, especially now that large amounts of computing power are readily available, to take such aberrations into account. While robustness and time series modeling have each been researched extensively, the application of robust methods to the estimation of time series models remains quite open. The central goal of this thesis is the study of robust parameter estimation for some simple vector and nonlinear time series models. More precisely, we briefly study some prominent linear and nonlinear models from the time series literature and apply the robust S-estimator to the parameters of some simple models: the vector autoregressive (VAR) model, the (0, 0, 1, 1) bilinear model, and a simple conditionally heteroscedastic bilinear model. In each case, we examine the important aspect of model stationarity and analyze the asymptotic behavior of the S-estimator.
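The thesis's S-estimators are beyond a short sketch, but the sensitivity of least squares to a single aberration, which motivates them, can be illustrated with a simpler robust stand-in (the Theil-Sen median-of-slopes estimator) on synthetic data:

```python
from statistics import median

# Synthetic line y = 0.6 x with small alternating noise and one gross outlier
x = list(range(20))
y = [0.6 * xi + (0.1 if xi % 2 == 0 else -0.1) for xi in x]
y[19] = 45.0  # contamination: a single large aberration

# Ordinary least squares slope
mx, my = sum(x) / len(x), sum(y) / len(y)
ols = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
    (xi - mx) ** 2 for xi in x)

# Robust slope: median of all pairwise slopes (Theil-Sen)
slopes = [(y[j] - y[i]) / (x[j] - x[i])
          for i in range(len(x)) for j in range(i + 1, len(x))]
robust = median(slopes)
```

One contaminated point is enough to pull the least-squares slope far from the true 0.6, while the median-based estimate is essentially unaffected; S-estimators pursue the same resistance with better statistical efficiency.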
- Research Article
11
- 10.1109/icjece.2020.3043756
- Jan 1, 2021
- IEEE Canadian Journal of Electrical and Computer Engineering
Array signal processing makes it possible to synthesize an optimal radiation pattern, steering the main beam toward the direction of interest while suppressing interference signals from unwanted directions and preserving the beamwidth. In this study, we rigorously present deep neural network (DNN) modeling of different antenna array geometries in three scenarios. In the first scenario, we briefly review and highlight some of the research methodologies and problems involved in DNN modeling. The DNN modeling is validated and verified by estimating the DNN outputs and obtaining the radiation patterns of the different array geometries. In the second scenario, we employ three criteria, mean squared error (MSE), Shannon error entropy minimization (EEM), and bit error rate (BER), to evaluate and validate the proposed DNN models for the different array geometries. The concept of Shannon error entropy can be extended to optimize nonlinear and non-Gaussian problems. In the third scenario, the application of the DNN models to direction-of-arrival (DoA) estimation is presented across the different array geometries.
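The radiation patterns such DNN models reproduce come from the classical array factor; for a uniform linear array, steering the main beam amounts to a progressive phase taper across the elements. A minimal sketch (element count, spacing, and steering angle are arbitrary choices, not the paper's configurations):

```python
import numpy as np

N, d = 8, 0.5                      # elements; spacing in wavelengths
theta0 = np.deg2rad(30.0)          # desired main-beam direction
theta = np.deg2rad(np.linspace(-90, 90, 721))
n = np.arange(N)

# Progressive phase taper steers the main beam toward theta0
weights = np.exp(-1j * 2 * np.pi * d * n * np.sin(theta0))

# Array factor magnitude |AF(theta)| over the scan grid
steer = np.exp(1j * 2 * np.pi * d * np.outer(np.sin(theta), n))
af = np.abs(steer @ weights)

peak_deg = float(np.rad2deg(theta[np.argmax(af)]))
```

At the steering angle the element contributions add in phase, so the peak magnitude equals the element count N; a DNN model of the array learns to map desired pattern constraints to such weight vectors.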
- Research Article
6
- 10.1177/1687814021992181
- Feb 1, 2021
- Advances in Mechanical Engineering
Dynamic parameters of joints are indispensable factors affecting the performance of machine tools. To obtain the stiffness and damping of the sliding joints between the working platform and the machine tool body of a surface grinder, a new dynamic parameter identification method based on deep neural network (DNN) modeling is proposed. First, a DNN model of the dynamic parameters of the working platform-machine tool body sliding joints is established, taking the stiffness and damping parameters as inputs and the natural frequencies as outputs. Second, the number of hidden layers in the DNN topology is optimally selected to obtain the best training results. Third, by combining the results predicted by the DNN model with experimental results from modal tests, the stiffness and damping are identified via the cuckoo search algorithm. Finally, the relative error between the predicted and experimental results is less than 2.2%, demonstrating extremely high prediction precision and thereby indicating the feasibility and effectiveness of the proposed method.
- Research Article
1
- 10.61927/igmin197
- Jun 13, 2024
- IgMin Research
In materials science, the integrity and completeness of datasets are critical for robust predictive modeling. Unfortunately, material datasets frequently contain missing values due to factors such as measurement errors, data non-availability, or experimental limitations, which can significantly undermine the accuracy of property predictions. To tackle this challenge, we introduce an optimized K-Nearest Neighbors (KNN) imputation method, augmented with Deep Neural Network (DNN) modeling, to enhance the accuracy of predicting material properties. Our study compares the performance of our enhanced KNN method against traditional imputation techniques: mean imputation and Multiple Imputation by Chained Equations (MICE). The results indicate that our enhanced KNN method achieves a superior R² score of 0.973, an improvement of 0.227 over mean imputation, 0.141 over MICE, and 0.044 over standard KNN imputation. This enhancement not only boosts data integrity but also preserves the statistical characteristics essential for reliable predictions in materials science.
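The KNN imputation being enhanced here can be sketched in a few lines: a missing entry is filled with the mean of that feature over the k rows nearest in the features both rows observe. A toy example (data invented; this is plain KNN imputation, not the paper's optimized variant):

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill each NaN with the mean of that column over the k nearest
    rows, with distance measured on the features both rows observe."""
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        candidates = []
        for j, other in enumerate(X):
            if j == i or np.isnan(other[miss]).any():
                continue  # neighbor must observe the missing features
            shared = ~np.isnan(row) & ~np.isnan(other)
            if shared.any():
                dist = np.linalg.norm(row[shared] - other[shared])
                candidates.append((dist, j))
        candidates.sort()
        neighbors = [j for _, j in candidates[:k]]
        filled[i, miss] = X[neighbors][:, miss].mean(axis=0)
    return filled

# Toy dataset: second feature is roughly twice the first
data = np.array([[1, 2], [2, 4], [3, 6], [4, np.nan], [5, 10]])
imputed = knn_impute(data, k=2)     # neighbors of row 3 are rows 2 and 4
mean_fill = np.nanmean(data[:, 1])  # naive global-mean fill for comparison
```

Here the KNN fill (8.0) respects the local structure, while the global-mean fill (5.5) ignores it, which is the gap the R² comparisons in the abstract quantify.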
- Research Article
5
- 10.1155/2022/2173005
- Mar 19, 2022
- Wireless Communications and Mobile Computing
This article discusses online and offline mixed teaching evaluation for MOOCs based on deep neural networks. Deep neural networks are an important means of solving problems in many fields. They can evaluate a teacher's teaching attitude, classroom content, narrative ability, the teaching methods used, and whether those methods are rigorous, and they can be trained on large datasets of student evaluations of a course to produce results. This article first explains the advantages of the neural network model and the reasons for the emergence of MOOCs and their blending with traditional classrooms. It then describes some deep neural network (DNN) models and algorithms, such as the backpropagation (BP) neural network. In a BP network, when there is an error between the network's output and the target sample, the error is backpropagated to adjust the thresholds and weights until the error is minimized; the algorithm consists of forward propagation and backpropagation steps, which are substituted into the gradient descent method to obtain the weight changes of the output layer and the hidden layer. The article also covers Gaussian models in DNNs: given training data vectors and a GMM configuration, expectation-maximization training is carried out with an iterative algorithm, and the unsupervised clustering accuracy (ACC) is used to evaluate its performance. Diagrams are used to describe the mixed teaching mode in the MOOC environment, which must take into account teaching practice conditions, time, location, curriculum resources, and teaching methods and means. This mode can cultivate students' spatial imagination, engineering awareness, creative design ability, hand-drawing ability, and logical thinking, and it allows teachers to receive fair and just evaluations from students.
Finally, the article discusses the parallelization and optimization of GPU-based DNN models: the DNN models are split, and different models are combined to compute the weight parameters. Model parallelism and data parallelism are combined to increase processing speed for the same amount of data, enlarge the batch size, improve accuracy, and reduce training oscillation. The results show that the DNN model greatly improves training performance on the MOOC online and offline mixed-course dataset: computation time is shortened, convergence is accelerated, and both accuracy and the speedup ratio are improved, with a year-on-year increase of more than 37.37% in the speedup ratio and more than 12.34% in accuracy.
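The forward propagation / backpropagation / gradient descent loop described above can be sketched with a one-hidden-layer network in NumPy (the regression task and all hyperparameters are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy regression task: learn y = x^2 on [-1, 1]
X = np.linspace(-1, 1, 64).reshape(-1, 1)
Y = X ** 2

# One hidden layer of tanh units (a minimal "BP network")
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

lr, losses = 0.05, []
for _ in range(3000):
    # forward propagation
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    E = P - Y
    losses.append(float(np.mean(E ** 2)))
    # backpropagation: the error signal flows output -> hidden -> input weights
    dP = 2.0 * E / len(X)
    dW2, db2 = H.T @ dP, dP.sum(0)
    dH = (dP @ W2.T) * (1.0 - H ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1, db1 = X.T @ dH, dH.sum(0)
    # gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The mean squared error shrinks as the backpropagated error adjusts the weights, which is the mechanism the abstract describes for fitting student-evaluation data.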
- Research Article
9
- 10.1177/0972150918811701
- Dec 25, 2018
- Global Business Review
High-frequency financial time series data can characterize market microstructure and are helpful in making rational real-time decisions. These datasets carry unique characteristics and properties that are not available in low-frequency data; at the same time, high-frequency data also create more challenges and opportunities for econometric modelling and financial data analysis. It is therefore essential to know the features of, and facts related to, high-frequency time series data. In this article, we describe the characteristics and stylized facts exhibited by the high-frequency financial time series data of the S&P CNX Nifty futures index. The stylized facts mostly relate to the empirically observed behaviour, distributional properties, autocorrelation function, and seasonality of the high-frequency data. The article also illustrates the importance of stationarity in financial time series analysis. Knowledge of such facts and concepts helps in building better empirical models and producing reliable forecasts.
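Two of the stylized facts mentioned, strong persistence in price levels versus near-zero autocorrelation in returns, can be illustrated on simulated data (the random-walk model below is a generic stand-in, not estimated from Nifty futures):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated log-price random walk as a stand-in for high-frequency data
log_p = np.cumsum(rng.normal(0.0, 0.001, 5000))
returns = np.diff(log_p)

def acf(x, lag):
    """Sample autocorrelation at the given lag."""
    x = x - x.mean()
    return float(np.sum(x[:-lag] * x[lag:]) / np.sum(x * x))

price_acf1 = acf(log_p, 1)     # near 1: levels are highly persistent
return_acf1 = acf(returns, 1)  # near 0: returns resemble white noise
```

The contrast is the standard argument for differencing to returns before modeling: levels look nonstationary, while returns are much closer to a stationary series.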
- Conference Article
2
- 10.1109/ickecs56523.2022.10060240
- Dec 28, 2022
Deep learning and neural network methods can analyze and predict various kinds of information generated by financial markets. This kind of economic and financial analysis can predict and describe market trends, prices, risks, and other information in greater detail. To address the shortcomings of existing economic and financial data analysis and research, this paper discusses time series model equations, convolutional neural networks, and economic and financial data analysis methods, and briefly describes the test environment, data collection, and metrics of the system designed in this paper. In addition, the functions of the economic and financial data analysis system are designed and discussed. Finally, the deep learning technologies CNN, LSTM, and RNN are applied to experiments predicting stock opening, closing, highest, and lowest prices. The experimental data show that the average prediction accuracy for stock prices reaches 87.33% for the CNN, 87.37% for the LSTM, and 97.36% for the RNN, verifying that the algorithms in this paper perform well.
- Research Article
1
- 10.1016/j.eswa.2024.124201
- May 14, 2024
- Expert Systems With Applications
A novel robust black-box fingerprinting scheme for deep classification neural networks
- Conference Article
- 10.1109/iaecst54258.2021.9695899
- Dec 10, 2021
Because individual entries of the station-area statistical line loss rate are missing from the collected line-loss statistics tables, it is impossible to judge whether a station area has abnormal line loss. To address this, a prediction-and-filling method for the station-area statistical line loss rate based on a deep neural network (DNN) is proposed. First, the missing individual entries are predicted through DNN modeling and model training; second, comparison with other regression algorithms shows that the model's performance is better; finally, error comparison shows that the method attains a certain prediction accuracy. This method can effectively predict missing individual entries of the statistical line loss rate for a station area.
- Research Article
12
- 10.1016/j.asr.2023.08.057
- Sep 4, 2023
- Advances in Space Research
A comparative evaluation of deep convolutional neural network and deep neural network-based land use/land cover classifications of mining regions using fused multi-sensor satellite data
- Research Article
- 10.15587/2312-8372.2019.183868
- Oct 31, 2019
- Technology audit and production reserves
The paper examines certain aspects of financial time series, in particular the modeling of return on assets. The object of research is a system of indicators for analyzing the returns of financial time series. A key feature distinguishes the analysis of financial time series from the analysis of other time series: financial theory and its empirical time series contain an element of uncertainty. As a result of this additional uncertainty, statistical theory and its methods and models play an important role in the analysis of financial time series. One of the most problematic practices is the use of asset prices and their volatility in the analysis and forecasting of financial time series, which is unsound because such series contain an element of uncertainty. Therefore, the so-called return on financial assets and instruments should be used in tasks of this type. The paper deals with the types of return on financial assets that can be used in the mathematical modeling and forecasting of stock indices. Statistical methods are used to eliminate the disadvantages of using financial asset prices in the analysis and forecasting of financial time series. The empirical properties of financial time series are examined using the PFTS (First Stock Trading System) and S&P 500 indices. The result is a comprehensive system of indicators for the time series analysis of financial assets. The proposed system involves numerous methods of calculating the profitability (return) of assets in order to determine significant statistical characteristics of the data. Compared with similar known methods that use the prices (rather than the returns) of assets, this provides a key advantage: it accommodates the elements of uncertainty in financial and economic data.
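The returns this paper advocates over raw prices come in two standard forms, simple and logarithmic; log returns have the convenient property of summing across periods. A small sketch with invented prices:

```python
import math

prices = [100.0, 105.0, 102.9, 108.0]  # invented price path

# Simple (arithmetic) returns: (P_t - P_{t-1}) / P_{t-1}
simple = [(p1 - p0) / p0 for p0, p1 in zip(prices, prices[1:])]

# Log returns: ln(P_t / P_{t-1}); these sum across periods
logret = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
total_log = math.log(prices[-1] / prices[0])
```

The additivity of log returns (their sum equals the full-horizon log return) is one reason they are preferred for the statistical characterizations the paper builds.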
- Research Article
- 10.1016/j.bdr.2025.100570
- Nov 1, 2025
- Big Data Research
- Research Article
- 10.1016/j.bdr.2025.100569
- Oct 1, 2025
- Big Data Research
- Research Article
- 10.1016/j.bdr.2025.100534
- Aug 1, 2025
- Big Data Research
- Research Article
- 10.1016/j.bdr.2025.100540
- Aug 1, 2025
- Big Data Research
- Research Article
- 10.1016/j.bdr.2025.100553
- Aug 1, 2025
- Big Data Research
- Research Article
- 10.1016/j.bdr.2025.100552
- Aug 1, 2025
- Big Data Research
- Research Article
- 10.1016/j.bdr.2025.100539
- Aug 1, 2025
- Big Data Research
- Research Article
- 10.1016/j.bdr.2025.100557
- Aug 1, 2025
- Big Data Research
- Research Article
- 10.1016/j.bdr.2025.100551
- Aug 1, 2025
- Big Data Research
- Research Article
- 10.1016/j.bdr.2025.100542
- Aug 1, 2025
- Big Data Research