Predictive Modelling in Flood Area Using Artificial Intelligence and Machine Learning Methods: Study Case West Java Province
- Research Article
- 10.1007/s10668-025-06186-4
- May 7, 2025
- Environment, Development and Sustainability
Diatom indices are used to assess the quality of sustainable river ecosystems. Their traditional assessment involves complicated and lengthy process steps, and artificial intelligence-based modelling now plays a key role in overcoming this complexity. The aim of this work is to model three selected diatom indices, the Biological Diatom Index (BDI), the Trophic Diatom Index (TDI) and the General Diatom Index (GDI), from the physicochemical structure of river ecosystems using artificial intelligence and machine learning methods. The application part of the study used, as a data set, surface water variables from rivers monitored at five stations for 24 months. Traditional analyses were compared with artificial intelligence and machine learning methods in MATLAB. Different algorithms were considered, including the Neural Network/Multilayer Perceptron (MLP), Support Vector Machine (SVM), Linear Regression (LR), Gaussian Process Regression (GPR), Decision Tree and Levenberg-Marquardt (LM) approaches. Model quality was evaluated by comparing the coefficient of determination (R2), root mean square error (RMSE) and mean absolute percentage error (MAPE). The Levenberg-Marquardt model gave the best prediction results; its R2 values were 0.9620 (training), 0.7691 (validation) and 0.8613 (testing) for BDI, and 0.9303 (training), 0.9273 (validation) and 0.9199 (testing) for both TDI and GDI. The Levenberg-Marquardt approach therefore predicted the diatom indices accurately and with high precision. Our results show that artificial intelligence and machine learning methods are highly efficient tools for the prediction of diatom indices, demonstrating a time-efficient and labour-saving application in sustainable ecosystem management.
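The study itself was carried out in MATLAB; as a rough, hedged sketch of the same kind of workflow, the snippet below trains a small multilayer perceptron on physicochemical variables and reports R2, RMSE and MAPE with scikit-learn. The file name and column names (pH, conductivity, BDI, etc.) are hypothetical placeholders, not the authors' actual variables, and the "lbfgs" solver only stands in loosely for Levenberg-Marquardt training.

```python
# Illustrative sketch only: the study used MATLAB; file and column names here are
# hypothetical placeholders, not the authors' actual variables.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_percentage_error

# Assume a table of monthly physicochemical measurements plus a diatom index column.
df = pd.read_csv("river_monitoring.csv")
X = df[["pH", "conductivity", "dissolved_oxygen", "nitrate", "phosphate"]]
y = df["BDI"]  # Biological Diatom Index

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)

# A small multilayer perceptron; "lbfgs" is a quasi-Newton solver used here as a
# rough stand-in for the Levenberg-Marquardt training described in the paper.
model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("R2  :", r2_score(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
print("MAPE:", mean_absolute_percentage_error(y_test, pred))
```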
- Book Chapter
- 10.1007/978-3-030-90708-2_6
- Jan 1, 2022
Machine learning (ML) and artificial intelligence (AI) methods are some of the latest advancements in the field of computing. Among these methods, there are nature-inspired techniques such as deep learning and deep neural networks, which are inspired from the neural networks of the human brain. These methods are applicable towards the security of networks and network-connected machines from malware, intrusion, and other cyberattacks. For example, in dealing with modern cyber threats, some standard ML and AI methods that can be useful are malicious code recognition for malware analysis, object-based modeling to classify security threats, and heuristic rule systems for intrusion detection. In this way, ML and AI can play a key role in cyber threat detection and prevention. Due to the large amounts of data packets passing through a network, processing and parsing through that data to find malware, intrusion, or other malicious code and files can be overwhelmingly difficult for humans. Machine learning models can be trained to detect malicious patterns in data or files and can thus be used to automatically detect malware or intrusive activity. Additionally, humans are limited in terms of the amount of time or duration that they can spend, but once programmed, a machine learning model can continue running and operating nonstop to detect and prevent malicious code and files from entering a network-connected system. This can reduce human effort and minimize human error by automating the computing required to detect and thwart cyberattacks. This paper surveys and reviews different AI and ML methods that have been used in past literature for cybersecurity applications. The goal of this work is to aid cybersecurity researchers and professionals on how to employ AI and ML techniques for cybersecurity applications, such as malicious code detection, intrusion detection, and cyber threat analysis.
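As a minimal illustration of the idea described above (training a supervised model on labelled traffic so it can flag malicious activity automatically), the sketch below fits a random forest on network-flow features. The CSV file and feature names are hypothetical placeholders; the survey itself does not prescribe this particular pipeline.

```python
# Hedged sketch: a supervised classifier trained on labelled flow features so new
# traffic can be scored continuously. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

flows = pd.read_csv("labelled_flows.csv")
X = flows[["duration", "bytes_sent", "bytes_received", "dst_port", "packet_rate"]]
y = flows["is_malicious"]  # 1 = malware / intrusion, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

# Once trained, the model can run nonstop on incoming traffic, unlike a human analyst.
print(classification_report(y_test, clf.predict(X_test)))
```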
- Research Article
- 10.1093/bib/bbaa369
- Jan 6, 2021
- Briefings in Bioinformatics
Objective: Development of novel informatics methods focused on improving pregnancy outcomes remains an active area of research. The purpose of this study is to systematically review the ways that artificial intelligence (AI) and machine learning (ML), including deep learning (DL), methodologies can inform patient care during pregnancy and improve outcomes. Materials and methods: We searched English articles on EMBASE, PubMed and SCOPUS. Search terms included ML, AI, pregnancy and informatics. We included research articles and book chapters, excluding conference papers, editorials and notes. Results: We identified 127 distinct studies from our queries that were relevant to our topic and included in the review. We found that supervised learning methods were more popular (n = 69) than unsupervised methods (n = 9). Popular methods included support vector machines (n = 30), artificial neural networks (n = 22), regression analysis (n = 17) and random forests (n = 16). Methods such as DL are beginning to gain traction (n = 13). Common areas within the pregnancy domain where AI and ML methods were used the most include prenatal care (e.g. fetal anomalies, placental functioning) (n = 73); perinatal care, birth and delivery (n = 20); and preterm birth (n = 13). Efforts to translate AI into clinical care include clinical decision support systems (n = 24) and mobile health applications (n = 9). Conclusions: Overall, we found that ML and AI methods are being employed to optimize pregnancy outcomes, including modern DL methods (n = 13). Future research should focus on less-studied pregnancy domain areas, including postnatal and postpartum care (n = 2). Also, more work on clinical adoption of AI methods and the ethical implications of such adoption is needed.
- Research Article
- 10.1007/s11030-021-10326-z
- Oct 23, 2021
- Molecular Diversity
The global spread of COVID-19 has raised the importance of pharmaceutical drug development as an urgent and active research area. Developing new drug molecules to overcome any disease is a costly and lengthy process, but the process continues uninterrupted. The critical point in drug design is to use the available data resources and to find new and novel leads. Once the drug target is identified, several interdisciplinary areas work together with artificial intelligence (AI) and machine learning (ML) methods to obtain enriched sets of candidate drugs. These AI and ML methods are applied at every step of computer-aided drug design, and integrating them results in a high success rate of hit compounds. In addition, the integration of AI and ML with high-dimensional data and their powerful predictive capacity has taken the field a step forward. Predicting clinical trial outcomes with AI/ML-integrated models could further decrease clinical trial costs while also improving the success rate. Through this review, we discuss the methods behind AI and ML support for computer-aided drug design, along with the challenges and opportunities they present for the pharmaceutical industry. Graphical abstract: From the available information or data, AI- and ML-based prediction is used for high-throughput virtual screening. With this integration of AI and ML, the success rate of hit identification has gained momentum, providing novel drugs.
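To make the virtual-screening idea concrete, here is a toy, hedged sketch of one common pattern: fit a regressor on molecular descriptors of compounds with known activity, then rank an unscored library by predicted activity. The descriptor names, file names and the pIC50 target are hypothetical; the review does not prescribe this specific pipeline.

```python
# Toy sketch of ML-assisted virtual screening. Files, columns and descriptors are
# hypothetical placeholders, not taken from the review.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

train = pd.read_csv("assayed_compounds.csv")     # descriptors + measured pIC50
library = pd.read_csv("screening_library.csv")   # descriptors only

descriptors = ["mol_weight", "logP", "h_bond_donors", "h_bond_acceptors", "tpsa"]
model = GradientBoostingRegressor(random_state=0)
model.fit(train[descriptors], train["pIC50"])

# Rank the unscored library by predicted activity and keep the top candidates.
library["predicted_pIC50"] = model.predict(library[descriptors])
hits = library.sort_values("predicted_pIC50", ascending=False).head(100)
print(hits[["compound_id", "predicted_pIC50"]])
```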
- Research Article
- 10.2139/ssrn.3788977
- Jan 1, 2021
- SSRN Electronic Journal
Corporate Social Responsibility (CSR) has become an ever more relevant theme for firms in their dealings with investors, customers, and the public at large. CSR is also a domain where problems of misconduct and non-compliance may occur. Investors and activists are interested in CSR-related compliance for practical reasons; researchers are interested in whether compliance in the domain of CSR can be predicted or detected. Structurally, this problem is similar to detecting non-compliance in the domain of financial regulation (fraud detection), a standard application of artificial intelligence and machine learning methods that are already applied successfully in legal and financial compliance. A yet unanswered question is whether such methods can be applied in the domain of CSR, too. This conceptual paper outlines possible strategies for applying artificial intelligence and machine learning methods to the domain of CSR. The paper starts out by elaborating the differences between compliance in the domain of CSR and legal and financial compliance. A crucial difference is that in the case of legal and financial compliance, non-compliance is defined in legal terms and can, in principle, be recognized objectively, with official institutions providing the classifications on which data sets are based. In the domain of CSR, compliance is more difficult to define and, consequently, much more difficult to detect. This paper proposes and illustrates options for harnessing the potential of machine learning and artificial intelligence methods for issues of CSR-related non-compliance.
- Research Article
- 10.3389/fpsyt.2021.738466
- Sep 20, 2021
- Frontiers in Psychiatry
Introduction: Electronic health records (EHR) and administrative healthcare data (AHD) are frequently used in geriatric mental health research to answer various health research questions. However, there is an increasing amount and complexity of data available that may lend itself to alternative analytic approaches using machine learning (ML) or artificial intelligence (AI) methods. We performed a systematic review of the current application of ML or AI approaches to the analysis of EHR and AHD in geriatric mental health. Methods: We searched MEDLINE, Embase, and PsycINFO to identify potential studies. We included all articles that used ML or AI methods on topics related to geriatric mental health utilizing EHR or AHD data. We assessed study quality with either the Prediction model Risk OF Bias ASsessment Tool (PROBAST) or the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) checklist. Results: We initially identified 391 articles through an electronic database and reference search, and 21 articles met the inclusion criteria. Among the selected studies, EHR was the most used data type, and the datasets were mainly structured. A variety of ML and AI methods were used, with prediction or classification being the main application and random forest the most common ML technique. Dementia was the most commonly studied mental health condition. The relative advantages of ML or AI techniques compared with biostatistical methods were generally not assessed. Only three studies showed a low risk of bias (ROB) across all PROBAST domains, and none did across all QUADAS-2 domains. The quality of study reporting could be further improved. Conclusion: There are currently relatively few studies using ML and AI in geriatric mental health research based on EHR and AHD, although this field is expanding. Aside from dementia, there are few studies of other geriatric mental health conditions. The lack of consistent information in the selected studies precludes precise comparisons between them. Improving the quality of reporting of ML and AI work would help advance research in the field. Other avenues for improvement include using common data models to collect and organize data, and common datasets for ML model validation.
- Research Article
- 10.18668/ng.2019.02.07
- Feb 1, 2019
- Nafta-Gaz
The paper presents contemporary trends in artificial intelligence and machine learning methods, which include, among others, artificial neural networks, decision trees and fuzzy logic systems. Computational intelligence methods are part of the field of research on artificial intelligence. Selected computational intelligence methods were used to build medium-term monthly forecasts of natural gas demand for Poland. The accuracy of forecasts obtained using an artificial neural network and a decision tree was compared with classical linear regression on historical data from a ten-year period. The explanatory variables were gas consumption in other EU countries, average monthly temperature, industrial production, wages in the economy and the price of natural gas. Forecasting was carried out in five stages differing in the selection of the learning and testing samples, the use of data preprocessing and the elimination of some variables. For raw data and a random training set, the highest accuracy was achieved by linear regression. For preprocessed data and a random learning set, the decision tree was the most accurate. The forecast built on the first eight years and tested on the last two was most accurate with regression, but only slightly better than with the decision tree or neural network, regardless of data normalization and the elimination of collinear variables. The machine learning methods showed good accuracy for monthly gas consumption forecasts but nevertheless fell slightly short of classical linear regression, owing to the narrow set of explanatory variables. Machine learning methods should become more effective as the amount of data increases and the set of potential explanatory variables is expanded. With large volumes of data, machine learning methods can build prognostic models more effectively, without the analyst's laborious involvement in data preparation and multi-stage analysis. They will also allow the form of the prognostic models to be updated frequently, even after each addition of new data to the database.
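As a hedged sketch of the comparison described above, the snippet below fits linear regression and a decision tree on monthly data using a chronological split (first eight years for training, last two for testing). The CSV file, column names and the target column mirror the listed explanatory variables but are hypothetical placeholders, not the authors' Polish data.

```python
# Illustrative comparison of two of the model families discussed above on monthly data.
# The data file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_percentage_error

data = pd.read_csv("monthly_gas_demand.csv", parse_dates=["month"]).sort_values("month")
features = ["eu_gas_consumption", "avg_temperature", "industrial_production", "wages", "gas_price"]

# Chronological split: first 96 months (eight years) for training, last 24 for testing,
# mirroring one of the five forecasting stages described in the abstract.
train, test = data.iloc[:96], data.iloc[96:]

for name, model in [("linear regression", LinearRegression()),
                    ("decision tree", DecisionTreeRegressor(max_depth=4, random_state=0))]:
    model.fit(train[features], train["gas_demand"])
    mape = mean_absolute_percentage_error(test["gas_demand"], model.predict(test[features]))
    print(f"{name}: MAPE = {mape:.2%}")
```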
- Abstract
- 10.1136/ijgc-2023-esgo.778
- Sep 1, 2023
- International Journal of Gynecologic Cancer
Introduction/BackgroundEndometrial carcinoma (EC) is the most common gynaecological malignancy in the developed world. Currently, no valid non-invasive diagnostic or prognostic methods exist, making diagnosis and treatment rely on histopathological and...
- Book Chapter
- 10.1016/b978-0-12-817133-2.00002-1
- Jan 1, 2020
- Artificial Intelligence in Precision Health
Chapter 2 - Artificial intelligence methods in computer-aided diagnostic tools and decision support analytics for clinical informatics
- Research Article
- 10.3390/jpm13060962
- Jun 7, 2023
- Journal of Personalized Medicine
In the past vicennium, several artificial intelligence (AI) and machine learning (ML) models have been developed to assist in medical diagnosis, decision making, and the design of treatment protocols. The number of active pathologists in Poland is low, prolonging tumor patients' diagnosis and treatment journey, and applying AI and ML may aid in this process. Our study therefore aims to investigate the knowledge of AI and ML methods in the clinical field among pathologists in Poland. To our knowledge, no similar study has been conducted. We conducted a cross-sectional study targeting pathologists in Poland from June to July 2022. The questionnaire included self-reported information on AI or ML knowledge, experience, specialization, personal thoughts, and level of agreement with different aspects of AI and ML in medical diagnosis. Data were analyzed using IBM® SPSS® Statistics v.26, PQStat Software v.1.8.2.238, and RStudio Build 351. Overall, 68 pathologists in Poland participated in our study. Their average age and years of experience were 38.92 ± 8.88 and 12.78 ± 9.48 years, respectively. Approximately 42% had used AI or ML methods, and a significant knowledge gap was observed relative to those who had never used them (OR = 17.9, 95% CI = 3.57-89.79, p < 0.001). Additionally, users of AI had higher odds of reporting satisfaction with the speed of AI in the medical diagnosis process (OR = 4.66, 95% CI = 1.05-20.78, p = 0.043). Finally, significant differences (p = 0.003) were observed in views on liability for legal issues arising from the use of AI and ML methods. Most pathologists in this study did not use AI or ML models, highlighting the importance of increasing awareness and educational programs regarding the application of AI and ML in medical diagnosis.
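The survey reports its associations as odds ratios with 95% confidence intervals. As a minimal sketch of how such a statistic can be computed from a 2x2 contingency table (with a Wald interval), see below; the cell counts are invented for illustration and are not the study's data.

```python
# Minimal sketch of an odds ratio with a Wald 95% confidence interval from a 2x2 table.
# The cell counts are invented placeholders, not data from the study.
import numpy as np

#                    good knowledge   poor knowledge
# used AI/ML                a               b
# never used AI/ML          c               d
a, b, c, d = 20, 8, 10, 30  # hypothetical counts

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```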
- Conference Article
- 10.23919/cycon49761.2020.9131724
- May 1, 2020
Within the next decade, the need for automation and intelligent data handling and pre-processing is expected to increase in order to cope with the vast amount of information generated by a heavily connected and digitalised world. Over the past decades, modern computer networks, infrastructures and digital devices have grown in both complexity and interconnectivity. Cyber security personnel protecting these assets have been confronted with growing attack surfaces and increasingly advanced attack patterns. To manage this, cyber defence methods began to rely on automation and (artificial) intelligence supporting the work of humans. However, machine learning (ML) and artificial intelligence (AI) supported methods have not only been integrated in network monitoring and endpoint security products but are almost omnipresent in any application involving constant monitoring or complex or large volumes of data. Intelligent IDS, automated cyber defence, network monitoring and surveillance, as well as secure software development and orchestration, are all examples of assets that rely on ML and automation. These applications are of considerable interest to malicious actors due to their importance to society. Furthermore, ML and AI methods are also used in audio-visual systems utilised by digital assistants, autonomous vehicles, face-recognition applications and many others. Successful attack vectors targeting the AI of audio-visual systems have already been reported; these attacks range from ones requiring little technical knowledge to complex attacks hijacking the underlying AI. With society's increasing dependence on ML and AI, we must prepare for the next generation of cyber attacks being directed against these areas. Attacking a system through its learning and automation methods allows attackers to damage it severely while operating covertly. The inherent stealth of such manipulations, their devastating impact and the wide unawareness of AI and ML vulnerabilities make attack vectors against AI and ML highly favourable for malicious operators. Furthermore, AI systems tend to be difficult to analyse post-incident as well as to monitor during operations, and discriminating a compromised from an uncompromised AI in real time is still considered difficult. In this paper, we report on the state of the art of attack patterns directed against AI and ML methods. We derive and discuss the attack surface of prominent learning mechanisms utilised in AI systems. We conclude with an analysis of the implications of AI and ML attacks for the next decade of cyber conflicts, as well as mitigation strategies and their limitations.
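One widely studied attack pattern against ML models is the adversarial example. The hedged sketch below shows the fast gradient sign method (FGSM): perturb an input in the direction that increases the model's loss. The toy classifier and random input are stand-ins for illustration only, not any specific system discussed in the paper.

```python
# Minimal FGSM sketch: perturb an input along the sign of the loss gradient.
# The model and input are toy stand-ins; with a trained model, such a small
# per-pixel perturbation can be enough to flip the prediction.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
true_label = torch.tensor([3])

loss = F.cross_entropy(model(x), true_label)
loss.backward()

epsilon = 0.1  # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction   :", model(x).argmax().item())
print("adversarial prediction:", model(x_adv).argmax().item())
```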
- Book Chapter
- 10.1007/978-3-030-79353-1_10
- Jan 1, 2022
A variety of artificial intelligence and machine learning methods are applied for data analysis in various areas, including the data-rich healthcare domain. However, efforts to improve healthcare efficiency and use the captured information to improve treatment methods are often hampered by the poor quality of medical data collections, as a high percentage of health data is unstructured and preserved in different systems and formats. In addition, there is not always agreement on which artificial intelligence and machine learning methods perform better in different problem areas, or on which computer tools could make their application more convenient and flexible. The chapter provides essential characteristics of methods traditionally applied in statistics, such as regression analysis, as well as more advanced approaches including logit and probit models, K-means clustering, and neural networks. The performance of the methods, their analytical power and their relevance to the healthcare application domain are illustrated by brief experimental computations on a stroke patient database, carried out with several readily available software tools such as MS Excel, Statistica, Matlab and Google BigQuery ML.
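As a hedged sketch of two of the methods named above (a logit model and K-means) applied to a stroke-style dataset, see below. The CSV file and column names are hypothetical placeholders; the chapter itself performs its computations in MS Excel, Statistica, Matlab and Google BigQuery ML rather than Python.

```python
# Brief sketch: logistic regression (the "logit model") and K-means on a
# hypothetical stroke-patient table. File and column names are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

patients = pd.read_csv("stroke_patients.csv")
features = ["age", "avg_glucose_level", "bmi", "hypertension"]
X = StandardScaler().fit_transform(patients[features])

# Supervised: logistic regression predicting stroke occurrence.
logit = LogisticRegression(max_iter=1000).fit(X, patients["stroke"])
print("logit accuracy:", logit.score(X, patients["stroke"]))

# Unsupervised: K-means grouping patients into three risk-profile clusters.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
patients["cluster"] = kmeans.labels_
print(patients.groupby("cluster")[features].mean())
```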
- Research Article
- 10.1186/s43094-025-00856-w
- Aug 1, 2025
- Future Journal of Pharmaceutical Sciences
Background: Recently, the need for artificial intelligence (AI) and machine learning (ML) methods in drug development and research has been gaining attention and ground. Moreover, providing pharmaceutical and related schools with non-commercial, free-to-use programming languages, software and tools is becoming an unavoidable need. The R programming language can easily be used, through correct and simplified code and packages, to conduct unsupervised ML methods such as principal component analysis (PCA) and hierarchical clustering analysis (HCA) after calculating relevant descriptors of drugs and molecules. Objective: The objective of this study was to assess how using ML methods such as PCA and HCA with R programming enhances the perception of drug formulation among students without a computer science background. Results: Undergraduate students were taught to use R to derive PCA plots such as score, loading and scree plots, in addition to HCA dendrograms, in the context of developing new pharmaceutical formulations. Surveys conducted before and after teaching the course showed that implementing such ML methods can help students better understand and explore the data, derive meaningful conclusions, and make informed decisions that support the development of high-quality pharmaceutical formulations with minimal resource consumption. Conclusion: We report the easy use of R programming in applications and activities that introduce undergraduate Pharmaceutical Engineering and Biotechnology students to ML methods. Student surveys showed improved satisfaction and understanding of AI applications in solving pharmaceutical problems. We suggest that these students and early-career researchers, who are not computer science specialists, can use R programming to perform important pharmaceutical applications through the step-by-step guide and code provided in this article.
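The article itself provides step-by-step R code for these workflows. For consistency with the other snippets here, the hedged sketch below reproduces the same two ideas (a PCA scree and score plot, and an HCA dendrogram) in Python on randomly generated descriptor data, which is a stand-in rather than any formulation data from the paper.

```python
# Hedged Python equivalent of the PCA and HCA workflows taught in the article
# (the paper's own examples are in R). The descriptor matrix is synthetic.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))  # 20 hypothetical formulations, 6 descriptors
X_std = StandardScaler().fit_transform(X)

# PCA: scree plot (explained variance per component) and score plot (PC1 vs PC2).
pca = PCA().fit(X_std)
scores = pca.transform(X_std)

plt.figure()
plt.plot(pca.explained_variance_ratio_, marker="o")
plt.title("Scree plot")

plt.figure()
plt.scatter(scores[:, 0], scores[:, 1])
plt.title("PCA score plot")

# HCA: Ward-linkage dendrogram over the same standardized descriptors.
plt.figure()
dendrogram(linkage(X_std, method="ward"))
plt.title("HCA dendrogram")

plt.show()
```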
- Research Article
- 10.1146/annurev-bioeng-110220-030247
- Feb 28, 2023
- Annual Review of Biomedical Engineering
Artificial intelligence (AI) and machine learning (ML) methods are currently widely employed in medicine and healthcare. A PubMed search returns more than 100,000 articles on these topics published between 2018 and 2022 alone. Notwithstanding several recent reviews in various subfields of AI and ML in medicine, we have yet to see a comprehensive review around the methods' use in longitudinal analysis and prediction of an individual patient's health status within a personalized disease pathway. This review seeks to fill that gap. After an overview of the AI and ML methods employed in this field and of specific medical applications of models of this type, the review discusses the strengths and limitations of current studies and looks ahead to future strands of research in this field. We aim to enable interested readers to gain a detailed impression of the research currently available and accordingly plan future work around predictive models for deterioration in health status.
- Research Article
- 10.30977/veit.2226-9266.2019.15.0.17
- Jun 2, 2019
- Vehicle and Electronics. Innovative Technologies
Problem. This paper considers the problems and risks of introducing artificial intelligence (AI) into human civilization, as well as the stages of the development of artificial intelligence, from the games of checkers and chess through machine learning to deep learning (from 1950 to the present). Goal. The aim of the work is to review and evaluate the features of machine learning, including deep learning, since these artificial intelligence methods are the most actively developed and characterize the field most fully. Methodology. Supervised and unsupervised machine learning, the problems of machine learning and families of algorithms for solving them are considered. Results. It is shown that the current state of development of artificial intelligence, in terms of the number of equivalent neurons used, corresponds to the level of a mouse. Mankind has several decades left to prepare for the ubiquitous spread of robots with artificial intelligence. The difference between a conventional program and machine learning is shown, and the features of machine learning under various schemes are analysed. Examples of the learning process of an algorithm, types of machine learning, and classifications of tasks and algorithms are given; the distinction between problems and families of algorithms is shown; and different machine learning algorithms are compared. The scope of machine learning is defined, and examples of the use of Google's cloud machine learning services are given. It is concluded that, instead of creating a program manually using a special set of commands, the algorithm is prepared using a large amount of data. Examples of the use of artificial intelligence in business processes, such as manufacturing and, in particular, engineering, are provided. Originality. The dangers of introducing artificial intelligence are formulated, and the areas of applicability of artificial intelligence and machine learning preferred for relative safety reasons, such as health and education, are proposed. Practical value. The attention of specialists is drawn to features of artificial intelligence that may be important in various areas of human life and activity. Key words: machine learning; artificial intelligence; Industry 4.0; deep learning; logistics.