Implementations of data mining techniques in stock market forecasting: a generic framework
This paper presents a categorical review of the existing literature on share market forecasting. Share market forecasting has long been practised in econometrics and statistics, and the emergence of artificial intelligence and data mining is giving it new dimensions. Data mining is used today to mine terabytes of data and find new, useful patterns for the betterment of society. One of its applications is to mine publicly available stock data and help investors formulate decisions. We have conducted an extensive review of the existing literature on the use of data mining techniques in share market forecasting and propose a new framework for real-time stock price forecasting.
- Conference Article
11
- 10.18287/1613-0073-2017-1903-115-121
- Jan 1, 2017
The article describes the basic principles and methods of Data mining and Process mining, and their similarities and differences. The authors examine research in the Educational Data Mining field, associated with the use of Data mining techniques in education, give examples of problems to be solved with Data mining and Process mining techniques in traditional and e-learning, and describe the possibilities and limitations of the different methods. Some examples of specialised software for Data mining and Process mining are presented, and a review of the major scientific conferences and journals devoted to Educational Data Mining research is given.
- Book Chapter
2
- 10.1007/978-3-642-12145-6_47
- Jan 1, 2010
Since people cannot accurately predict what will happen in the next moment, forecasting has long been a challenging and closely watched problem in many areas, especially stock market forecasting. Whenever investors produce reasonable predictions with little bias, great profit can be made. As artificial intelligence (AI) algorithms have emerged in recent years, they have played an important role in helping people forecast the future. In the stock market, many forecasting models have been advanced by academic researchers to forecast stock prices, such as time-series, technical-analysis and fuzzy time-series models. However, past models have some drawbacks: (1) strict statistical assumptions are required; (2) subjective human judgments are involved in the forecasting procedure; and (3) a proper threshold is not easy to find. For these reasons, a novel forecasting model based on variation and trend patterns is proposed in this paper. To verify its forecasting performance, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) for the year 2000 is used as the experimental dataset, and two earlier fuzzy time-series models are used for comparison. The comparison results show that the proposed model outperforms the listed models in accuracy and stability.
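Fuzzy time-series models of the kind this abstract builds on typically begin by partitioning observed price variations into labelled linguistic intervals. A minimal, hypothetical sketch of that fuzzification step follows; the interval bounds, labels, and sample prices are illustrative assumptions, not taken from the paper:

```python
# Hypothetical fuzzification step for a fuzzy time-series model:
# map each daily price variation onto a linguistic interval.
# Interval bounds and labels below are illustrative only.

INTERVALS = {
    "big_drop":   (-float("inf"), -100.0),
    "small_drop": (-100.0, 0.0),
    "small_rise": (0.0, 100.0),
    "big_rise":   (100.0, float("inf")),
}

def fuzzify(variation, intervals=INTERVALS):
    """Return the linguistic label whose half-open interval contains the variation."""
    for label, (lo, hi) in intervals.items():
        if lo <= variation < hi:
            return label
    raise ValueError(f"no interval covers {variation}")

def fuzzify_series(prices):
    """Fuzzify the day-to-day variations of a price series."""
    return [fuzzify(b - a) for a, b in zip(prices, prices[1:])]

# Illustrative index levels: variations 150, -30, -130
print(fuzzify_series([5000, 5150, 5120, 4990]))
```

A model in this family would then forecast the next variation from patterns over these labels rather than from the raw prices.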
- Research Article
- 10.46324/pmp2403292
- Dec 1, 2024
- Postmodernism Problems
Artificial intelligence has entered the practice of public relations professionals and is on its way to fundamentally changing how the PR industry works. Many professionals are already using AI applications in their work, and a large part of the public remains unaware of this fact. Very often, both parties are unaware of the history and emergence of artificial intelligence, the understanding of which is the basis for understanding its capabilities and the risks of its use. This article attempts to gather basic information on the emergence of AI and its development up to the summer of 2024.
- Research Article
77
- 10.1016/j.eng.2017.04.002
- Aug 1, 2017
- Engineering
The Use of Data Mining Techniques in Rockburst Risk Assessment
- Research Article
4
- 10.1504/ijebr.2018.10012858
- Jan 1, 2018
- International Journal of Economics and Business Research
Globalisation and recent financial crises have increased the pressure on banks operating in emerging economies to maintain their competitive advantages and ensure sustainability. Recent technological advances have paved the way for using big data to assist companies in decision making, and one method management exploits to take advantage of the huge amount of available information is data mining. This study examines the employment of data mining techniques in the banking sector in an interesting research setting, namely Jordan. Its main objective is to explore perceptions of data mining techniques as a strategic management tool in the banking sector from an accounting and finance perspective. To this end, a questionnaire was designed and distributed to a sample of 76 banking employees in Jordan who are directly involved in the banks' decision-support-system units. Results showed that the use of data mining techniques is positively significant for data exchange with both the internal and the external environment of the bank. In addition, the results reported a significant impact of data mining techniques in supporting management decision making in the areas of accounting and finance.
- Research Article
54
- 10.1016/s0957-4174(03)00099-x
- Jun 14, 2003
- Expert Systems with Applications
Intelligent policy recommendations on enterprise resource planning by the use of agent technology and data mining techniques
- Conference Article
2
- 10.1109/icoei56765.2023.10125607
- Apr 11, 2023
Stock market prices have risen significantly in the current period, attracting firm shareholders. Shareholders and investors alike express a keen interest in stock market analysis and forecasting, which ultimately leads investors and other speculators to contribute to a company's financial success; a favourable prognosis can bring significant advantages. Today, ever more refined models, perspectives, and trend-analysis tools are being developed. The most efficient analytical framework among them is Long Short-Term Memory (LSTM), one of the Recurrent Neural Network algorithms, which can produce precise results when the right parameters are used. To accomplish this, a dataset of stock market data must be compiled, and all stock closing prices measured using a variety of hidden layers and units. To improve accuracy, the proposed work uses the SGD optimizer and hyperparameter tuning, and the algorithm is evaluated using root mean squared error. With this methodology, historical datasets can be used to forecast the stock market more accurately.
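The abstract evaluates its LSTM forecasts with root mean squared error (RMSE). As a minimal sketch of that metric in plain Python (the sample closing prices and forecasts are illustrative, not from the paper):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between two equal-length price series."""
    if len(actual) != len(predicted):
        raise ValueError("series must have equal length")
    squared_errors = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Illustrative closing prices vs. model forecasts (not from the paper).
actual = [101.0, 103.5, 102.0, 105.0]
predicted = [100.0, 104.0, 103.0, 104.5]
print(round(rmse(actual, predicted), 4))  # → 0.7906
```

Lower RMSE indicates forecasts closer to the observed closing prices, which is how competing hyperparameter settings would be compared.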
- Research Article
- 10.1007/s10115-010-0339-3
- Sep 1, 2010
- Knowledge and Information Systems
Data mining has emerged as one of the most dynamic and lively areas in IT research and development. Many data mining techniques and algorithms have been used in a diversity of application areas in banking, finance, medicine, education, business, marketing analysis, and beyond. They have been found to effectively discover meaningful patterns in large volumes of data in these application areas, and these achievements are due directly and indirectly to many years of effort in the R&D of new data mining methodologies, new processes in knowledge discovery, new theory on data mining foundations and new insights into the use of data mining techniques in emerging domains. By promoting novel, high-quality research and by developing innovative solutions to challenging real-world problems, we expect data mining to continue to advance and create a big impact in the world. It is this belief that prompted us to organize a two-day IEEE Data Mining Forum in May 2008 at The Hong Kong Polytechnic University, Hung Hom, Hong Kong. For the Forum, we invited a select number of the world's leading experts in data mining from North America, Europe, Australia, and Asia to give invited talks and participate in panel discussions. The forum was attended by academics and practitioners alike, and the presentations on original, cutting-edge, and state-of-the-art progress in data mining provided insights for the audience and the speakers to further advance the area. This special issue grows out of the presentations at the Forum. Some speakers were invited to submit a paper on the topic they presented. After a careful review process, four papers were finally selected. They cover the research areas of peer-to-peer networks, optimization-based data mining, graph clustering, and semi-supervised learning.
The first paper in this special issue is “A local asynchronous distributed privacy preserving feature selection algorithm for large peer-to-peer networks” by Kamalika Das, Kanishka Bhaduri and Hillol Kargupta. The paper describes a local-distributed privacy-preserving
- Book Chapter
- 10.4018/978-1-60566-010-3.ch242
- Jan 1, 2009
Despite its benefits in various areas (e.g., business, medical analysis, scientific data analysis, etc.), the use of data mining techniques can also result in new threats to privacy and information security. The problem is not data mining itself, but the way data mining is done. “Data mining results rarely violate privacy, as they generally reveal high-level knowledge rather than disclosing instances of data” (Vaidya & Clifton, 2003). However, the concern among privacy advocates is well founded, as bringing data together to support data mining projects makes misuse easier. Thus, in the absence of adequate safeguards, the use of data mining can jeopardize the privacy and autonomy of individuals. Privacy-preserving data mining (PPDM) cannot simply be addressed by restricting data collection or even by restricting the secondary use of information technology (Brankovic & Estivill-Castro, 1999). Moreover, there is no exact solution that resolves privacy preservation in data mining. In some applications, solutions for PPDM problems might meet privacy requirements and provide valid data mining results (Oliveira & Zaïane, 2004b). We have witnessed three major landmarks that characterize the progress and success of this new research area: the conceptive landmark, the deployment landmark, and the prospective landmark. The conceptive landmark characterizes the period in which central figures in the community, such as O’Leary (1995), Piatetsky-Shapiro (1995), and others (Klösgen, 1995; Clifton & Marks, 1996), investigated the success of knowledge discovery and some of the important areas where it can conflict with privacy concerns. The key finding was that knowledge discovery can open new threats to informational privacy and information security if not done or used properly. The deployment landmark is the current period, in which an increasing number of PPDM techniques have been developed and published in refereed conferences.
The information available today is spread over countless papers and conference proceedings. The results achieved in recent years are promising and suggest that PPDM will achieve the goals that have been set for it. The prospective landmark is a new period in which directed efforts toward standardization occur. At this stage, there is no consensus on privacy principles, policies, and requirements as a foundation for the development and deployment of new PPDM techniques, and the excessive number of techniques is leading to confusion among developers, practitioners, and others interested in this technology. One of the most important challenges in PPDM now is to establish the groundwork for further research and development in this area.
- Book Chapter
- 10.9734/bpi/mono/978-93-5547-265-6/ch0
- Dec 21, 2021
Data mining refers to a set of methods applicable to large and complex databases to eliminate randomness and discover hidden patterns. Data mining (DM), also known as knowledge discovery from databases (KDD), is the extraction of new knowledge from huge databases. It involves the use of sophisticated data-analysis tools to discover previously unknown, valid patterns and relationships in large data sets, and its tools can forecast future trends and activities to support people's decisions. Uncovering trends and patterns is a great power for businesses across all sectors and industries. Modern intrusion detection applications face a variety of issues: they must be reliable, extensible, manageable, and minimal in maintenance cost. Data mining-based intrusion detection systems (IDSs) have in recent years shown high accuracy, good generalisation to novel types of intrusion, and stable behaviour in changing environments. The number of hidden layers in various neural network topologies is evaluated in order to discover the best neural network. Misuse detection is the technique of attempting to discover instances of network attacks by comparing current behaviour to the expected actions of an intruder; artificial neural networks can detect and classify network activity using data that is limited, incomplete, and nonlinear. The main purpose of this work is to identify privacy and security concerns among cloud computing participants and consumers in a distributed environment. Techniques such as machine learning, natural language processing (NLP), and data mining are combined to automatically identify and uncover patterns in many sorts of material. Predictive analytics can deal with both continuous and discontinuous change.
Classification, prediction and, to some extent, affinity analysis constitute the analytical methods employed in predictive analytics. Contemporary research in text and document mining focuses on syntactic components and the semantic environment. To this end, and motivated by our previous research contributions, we investigated a mining model that classifies documents based on the Order of Context, Concept, and Semantic Relations (OCCSR). Data mining based on cloud computing will enable users to retrieve meaningful information from virtually integrated data warehouses, lowering infrastructure and storage costs, and can extract useful and potentially useful information from the cloud. Big data is typically defined by three characteristics known as the 3Vs (volume, velocity and variety). The work surveys approaches, environments, and technologies in key areas of big data analytics capability and discusses how they aid the development of analytics solutions for clouds. Clustering is an unsupervised learning technique used to discover new sets of categories. Grid-based clustering has the shortest processing time, which is typically determined by the size of the grid rather than by the data. We compare the performance of three clustering algorithms: hierarchical clustering, density-based clustering, and K-means clustering. The majority of current approaches to misuse detection involve rule-based expert systems that identify indicators of known attacks. We provide a brief overview of various artificial intelligence techniques and their advancements in the design, development, and application of intrusion detection systems (IDS) for protecting computer and communication networks from intruders. The goal of knowledge discovery in data (KDD) is to extract information that is not obvious, using careful and detailed analysis and interpretation.
To drive decisions and actions, analytics employs KDD, data mining, text mining, statistical and quantitative analysis, explanatory and predictive models, and advanced and interactive visualisation techniques.
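The chapter compares hierarchical, density-based, and K-means clustering. As a minimal, illustrative sketch of the K-means idea only (the one-dimensional data, initial centroids, and iteration count are hypothetical assumptions, not from the chapter):

```python
# Minimal K-means on 1-D data (illustrative; a real comparison would use a
# library such as scikit-learn on multi-dimensional data).
def kmeans_1d(points, centroids, iterations=10):
    """Iteratively assign each point to its nearest centroid, then re-centre."""
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [sum(ps) / len(ps) if ps else c
                     for c, ps in clusters.items()]
    return sorted(centroids)

# Two well-separated groups; the initial centroids are rough guesses.
data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]
print([round(c, 3) for c in kmeans_1d(data, [0.0, 5.0])])  # → [1.0, 10.0]
```

Hierarchical and density-based methods would recover the same two groups here by different means: by merging nearest clusters, and by growing dense neighbourhoods, respectively.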
- Conference Article
5
- 10.1109/icccnt54827.2022.9984410
- Oct 3, 2022
Big data analysis is now widely used as a result of the Internet's rapid development. Researchers have focused a great deal of attention on data mining because it is essential for obtaining potentially valuable information from big data. Python is a popular programming language for data mining, regarded as an indispensable tool because of its robust scientific computing capabilities and rich libraries. Many consumer purchasing-behaviour studies have been presented and applied to real problems, and in-depth consumer behaviour analysis is likely to benefit from data mining techniques. The data mining approach has both benefits and drawbacks, though; to mine databases effectively, it is crucial to choose the right techniques. This paper applies several methods, such as pincer-search-based association rule generation, to enhance conventional data mining analysis. The Apriori algorithm uses a bottom-up, breadth-first search: computation begins with the smallest frequent itemsets and advances until it reaches the largest frequent itemset, so the size of the largest frequent itemset equals the number of database passes. The algorithm must run many iterations whenever any frequent itemset grows longer, which lowers performance. To overcome this challenge, this paper proposes pincer-search-based association rule generation built on big data mining and classification algorithms for effective accuracy. Wholesale market data is used for simulation purposes.
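The bottom-up, breadth-first behaviour of Apriori described above can be sketched in a few lines of Python. The toy transactions and support threshold are illustrative assumptions, not data from the paper:

```python
def apriori(transactions, min_support):
    """Bottom-up, breadth-first frequent-itemset mining: each level k scans
    the database once, so the largest frequent itemset size equals the
    number of database passes."""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    k = 1
    candidates = [frozenset([i]) for i in items]
    while candidates:
        # One database pass: count support of every size-k candidate.
        counts = {c: sum(1 for t in transactions if c <= t)
                  for c in candidates}
        level = {c for c, n in counts.items() if n >= min_support}
        frequent.update({c: counts[c] for c in level})
        k += 1
        # Join step: build size-k candidates from frequent (k-1)-itemsets.
        candidates = [frozenset(u) for u in
                      {a | b for a in level for b in level if len(a | b) == k}]
    return frequent

baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread", "eggs"}]
freq = apriori(baskets, min_support=2)
print(sorted(tuple(sorted(s)) for s in freq))
```

A pincer search, by contrast, additionally probes maximal itemsets top-down to prune the level-by-level passes that this plain version performs.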
- Book Chapter
- 10.58532/v3biai12p7ch3
- Mar 5, 2024
Journalism and mass media complement each other. With the passage of time, the ever-renewed coordination of mass media and technology has brought about a dynamic change in journalism and mass media. This change is reflected not only in the writing on any topic, problem or event, but also in a decidedly changed manner of presentation. Artificial Intelligence is the latest information-presentation technology: on the one hand it has made news presentation accessible on various topics; on the other, by virtue of its inherent capabilities, it has raised questions about human intellectual capacity and credibility. The use of robot news anchors to present the news on many national and international media channels is just one example. Beyond information communication, in the field of social communication there remains a fine line between the real and the virtual, which requires additional skill to recognise and understand. The present chapter, “Future of Media after emergence of Artificial Intelligence: Issues and Challenges”, attempts to analyse the issues and challenges arising from the use of Artificial Intelligence in future media.
- Research Article
4
- 10.3389/frma.2024.1522423
- Jan 17, 2025
- Frontiers in research metrics and analytics
The emergence of artificial intelligence (AI) has revolutionised higher education teaching and learning. AI has the power to analyse large amounts of data and make intelligent predictions, thus changing the whole teaching and learning process. However, this rise has led institutions to question the morality of these applications. The changes have left librarians and educators worried about major ethical questions surrounding privacy, equality of information, protection of intellectual property, cheating, misinformation and job security. Libraries have always been concerned with ethics, and many go out of their way to make sure communities are educated on ethical questions; the emergence of artificial intelligence, however, has caught them unaware. This research investigates the preparedness of higher education librarians to support the ethical use of information within the higher and tertiary education fraternity. A qualitative approach was used: interviews were conducted with thirty purposively selected librarians and academics from universities in Zimbabwe. Findings indicated that many university libraries in Zimbabwe are still at the adoption stage of artificial intelligence, and that institutions and libraries are not yet prepared for AI use and are still crafting policies on it. Libraries seem ready to adopt AI and to offer training on how to protect intellectual property, but face serious challenges with transparency, data security, plagiarism detection and concerns about job losses. With no major ethical policies yet crafted on AI use, it remains challenging for libraries to fully adopt it.
- Research Article
- 10.23874/amber/2022/v13/i12/219192
- Oct 1, 2022
- AMBER – ABBS Management Business and Entrepreneurship Review
In the modern world, the perspectives and practices of doing business change often in order to sustain, grow and compete in a dynamic business environment. Every business practice has its importance, but supply chain management plays a major and crucial role in a business's success, since it directly or indirectly meets customers' expectations and satisfaction; indeed, supply chain management acts as the backbone of e-commerce businesses. To maintain competitiveness, businesses have begun to adapt technological transformation into their core business processes. Among these transformations is the emergence of Artificial Intelligence (AI), a hybrid technology that can bring sustainability and competitiveness to a business; however, it is very important to identify the core competence and match it with current business practices according to changing business needs, for only then can attainability and adaptability be realised. In this conceptual paper, the researcher tries to identify the major competencies involved in adapting Artificial Intelligence (AI) into a business's core supply chain management process, so that the technological impact produced in supply chain management can be portrayed for future implications across various dimensions of business and its related practices.
- Research Article
1
- 10.37591/joaira.v7i2.2538
- Aug 1, 2020
- Journal of Artificial Intelligence Research & Advances
The emergence of Artificial Intelligence in this technocratic world has led to a transformation in the world of technology. It has given rise to further technologies such as voice assistants, which work for humans to make the workplace easier, more useful and more understandable. AI has proved well suited to the mathematical and computational modelling of various aspects of language, including spoken-language systems that integrate speech with natural language. Natural Language Processing mainly focuses on areas such as research and language-text applications. Natural language is viewed in two terminologies: rule-based and statistical. Natural Language Processing is grounded in heuristic search and stemming, and largely uses English-language rules and processing regulations for speech and text. Keywords: Artificial Intelligence, Deep Learning, Machine Learning, Techniques of Natural Language Processing, Natural Language Processing, Lexical Input, Natural Language Generation, Natural Language Understanding. Cite this Article: Sapna Sharma. Natural Language Processing in AI: Language Interpreter. Journal of Artificial Intelligence Research & Advances. 2020; 7(2): 21–27p.