Prediction of the gradation stability of granular soils using machine learning techniques
- 9442 · 10.1145/130385.130401 · Jul 1, 1992
- 90 · 10.1061/(asce)gt.1943-5606.0001343 · Jun 4, 2015 · Journal of Geotechnical and Geoenvironmental Engineering
- 124 · 10.1007/s12517-017-3167-x · Sep 1, 2017 · Arabian Journal of Geosciences
- 38 · 10.1007/s11269-016-1547-8 · Nov 26, 2016 · Water Resources Management
- 279 · 10.1061/(asce)1090-0241(2008)134:1(57) · Jan 1, 2008 · Journal of Geotechnical and Geoenvironmental Engineering
- 150 · 10.1061/(asce)0733-9410(1984)110:6(701) · Jun 1, 1984 · Journal of Geotechnical Engineering
- 71 · 10.1016/j.tust.2015.11.024 · Jan 6, 2016 · Tunnelling and Underground Space Technology
- 739 · 10.1155/2013/425740 · Jan 1, 2013 · Mathematical Problems in Engineering
- 50 · 10.1139/t96-032 · Mar 25, 1996 · Canadian Geotechnical Journal
- 305 · 10.1007/s00366-015-0400-7 · Feb 18, 2015 · Engineering with Computers
- Research Article · 31 · 10.1145/3552512 · Jan 15, 2024 · ACM Transactions on Asian and Low-Resource Language Information Processing
Epilepsy is one of the most significant neurological disorders, affecting nearly 65 million people worldwide, and is characterized by repeated seizures. Numerous algorithms have been proposed for efficient seizure detection using intracranial and surface EEG signals, and the last decade has produced a variety of machine-learning-based detection approaches. This paper discusses machine learning and deep learning techniques for seizure detection from intracranial and surface EEG signals, comparing a wide range of methods such as support vector machine (SVM) classifiers, artificial neural network (ANN) classifiers, convolutional neural network (CNN) classifiers, and long short-term memory (LSTM) networks. The effectiveness of time-domain, frequency-domain, and time-frequency-domain features is discussed alongside the different techniques. Beyond EEG, other physiological signals such as the electrocardiogram, used to enhance seizure detection accuracy, are also covered. In recent years, deep-learning-based seizure detection has achieved good classification accuracy. Here, an LSTM-based approach is implemented for seizure detection and compared with state-of-the-art methods; it achieved 96.5% accuracy in seizure versus non-seizure EEG classification. Apart from analyzing physiological signals, sentiment analysis also has potential for detecting seizures. Impact statement: this review summarizes research on epileptic seizure detection using machine learning and deep learning techniques. Manual seizure detection is time consuming and requires expertise, so artificial intelligence techniques are used for automatic detection. Researchers are developing automatic seizure detection using EEG, ECG, accelerometer data, and sentiment analysis, and a review is needed that discusses previous techniques and gives direction for further research. Different seizure detection techniques are discussed here with an accuracy comparison table, giving an overview of both surface and intracranial EEG-based approaches so that new researchers can easily compare models and decide which to build on. A deep learning model is discussed to give a practical application of seizure detection, and summarizing sentiment analysis as another dimension of seizure detection offers a new perspective to the reader.
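As a concrete illustration of the LSTM approach summarized above, the following is a minimal sketch of a seizure versus non-seizure classifier over fixed-length EEG windows. It is not the paper's exact architecture: the 256-sample window, single channel, hidden size, and synthetic tensors are all illustrative assumptions.

```python
# Minimal sketch of an LSTM-based seizure/non-seizure classifier over
# fixed-length, single-channel EEG windows (all sizes are assumptions).
import torch
import torch.nn as nn

class SeizureLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # seizure vs. non-seizure logits

    def forward(self, x):                  # x: (batch, time, features)
        _, (h, _) = self.lstm(x)           # h: (1, batch, hidden), final state
        return self.head(h[-1])            # logits: (batch, 2)

# Synthetic stand-in for windowed EEG: 32 windows of 256 samples each.
x = torch.randn(32, 256, 1)
y = torch.randint(0, 2, (32,))

model = SeizureLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                         # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

A real pipeline would feed filtered, windowed EEG (or time-frequency features) in place of the random tensors and evaluate on held-out recordings.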
- Research Article · 20 · 10.1109/access.2021.3108073 · Jan 1, 2021 · IEEE Access
With the popularization of machine learning (ML) techniques and increased chipset performance, the application of ML to pedestrian localization systems has received significant attention in recent years. Several survey papers have attempted to provide a state-of-the-art overview, but they usually limit their scope to a particular type of positioning system or technology. In addition, they are written from the point of view of ML techniques and their practice, not from the point of view of the localization system and the specific problems that ML can help to solve. This article offers a comprehensive state-of-the-art survey of the ML techniques adopted over the last ten years to improve the performance of pedestrian localization systems, addressing the applicability of ML in this domain along with the main localization strategies. It concludes by indicating the open issues and challenges of existing systems, and possible future directions in which ML could improve pedestrian localization performance. Among other open issues, most previous authors have focused on position estimation accuracy, which wastes the potential of ML to improve other performance parameters (e.g., response time, computational complexity, robustness, scalability, or energy efficiency). This study shows a strong trend towards the application of supervised learning; consequently, there are many research opportunities in other learning types, such as unsupervised and reinforcement learning, for improving the performance of pedestrian localization systems.
- Research Article · 1 · 10.23967/j.rimni.2022.09.001 · Jan 1, 2022 · Revista Internacional de Métodos Numéricos para Cálculo y Diseño en Ingeniería
During the pandemic caused by the Coronavirus (Covid-19), Machine Learning (ML) techniques can be used, among other alternatives, to detect the virus in its early stages, which would aid a fast recovery and help ease the pressure on healthcare systems. In this study, we present a Systematic Literature Review (SLR) and a Bibliometric Analysis of ML technique applications in the Covid-19 pandemic, from January 2020 to June 2021, identifying possible unexplored gaps. In the SLR, the 117 most cited papers published during the period were analyzed and divided into four categories: 22 articles that analyzed the problem using ML techniques on X-Ray (XR) and Computed Tomography (CT) images of the lungs of infected patients; 13 articles that studied the problem through social network tools using ML techniques; 44 articles that used ML techniques directly in forecasting problems; and 38 articles that applied ML techniques to general issues regarding the disease. The gap identified in the literature concerned the use of ML techniques in analyzing the relationship between the human genotype and susceptibility to Covid-19 or the severity of the infection, a subject that has only begun to be explored in the scientific community.
- Book Chapter · 2 · 10.1007/978-3-030-74761-9_17 · Jul 28, 2021
As per the World Health Organization (WHO), coronaviruses are a large virus family that causes disease in humans and animals. The newly discovered coronavirus is known as Covid-19 (Cov-19). In December 2019, this virus broke out in Wuhan, China, causing massive havoc worldwide. Computational Intelligence (CI) methods encompass the design, development, theory, and application of standards related to computation. Conventionally, the three key components of CI are Artificial Neural Networks (ANN), Fuzzy Systems (FS), and Evolutionary Computation (EC); lately, techniques like chaotic systems and support vector machines (SVM) have also been included among CI techniques. Machine Learning (ML) enables systems to learn automatically without being explicitly programmed, and Deep Learning (DL) is a family of ML techniques based on ANNs. Great potential has been observed in applying CI, ML, and DL techniques to predicting Cov-19. The key objective of this chapter is therefore to present an extensive review of how CI, ML, and DL techniques can be utilized to effectively predict Cov-19. The chapter reviews the different CI, ML, and DL techniques, such as ANN, FS, and EC, that have been applied to Cov-19 prediction. The application and suitability of CI, ML, and DL techniques for screening and treating patients, tracing contacts, and forecasting Cov-19 are discussed in detail, along with why certain CI, ML, and DL techniques are useful for Cov-19 prediction.
- Research Article · 59 · 10.1097/acm.0000000000002414 · Mar 1, 2019 · Academic Medicine
To identify the different machine learning (ML) techniques that have been applied to automate physician competence assessment and evaluate how these techniques can be used to assess different competence domains in several medical specialties. In May 2017, MEDLINE, EMBASE, PsycINFO, Web of Science, ACM Digital Library, IEEE Xplore Digital Library, PROSPERO, and Cochrane Database of Systematic Reviews were searched for articles published from inception to April 30, 2017. Studies were included if they applied at least one ML technique to assess medical students', residents', fellows', or attending physicians' competence. Information on sample size, participants, study setting and design, medical specialty, ML techniques, competence domains, outcomes, and methodological quality was extracted. MERSQI was used to evaluate quality, and a qualitative narrative synthesis of the medical specialties, ML techniques, and competence domains was conducted. Of 4,953 initial articles, 69 met inclusion criteria. General surgery (24; 34.8%) and radiology (15; 21.7%) were the most studied specialties; natural language processing (24; 34.8%), support vector machine (15; 21.7%), and hidden Markov models (14; 20.3%) were the ML techniques most often applied; and patient care (63; 91.3%) and medical knowledge (45; 65.2%) were the most assessed competence domains. A growing number of studies have attempted to apply ML techniques to physician competence assessment. Although many studies have investigated the feasibility of certain techniques, more validation research is needed. The use of ML techniques may have the potential to integrate and analyze pragmatic information that could be used in real-time assessments and interventions.
- Supplementary Content · 29 · 10.3390/cancers13102469 · May 19, 2021 · Cancers
Simple Summary: Non-invasive imaging modalities are commonly used in clinical practice. Recently, the application of machine learning (ML) techniques has provided a new scope for more detailed imaging analysis in esophageal cancer (EC) patients. Our review aims to explore the recent advances and future perspective of the ML technique in the disease management of EC patients. ML-based investigations can be used for diagnosis, treatment response evaluation, prognostication, and investigation of biological heterogeneity. The key results from the literature have demonstrated the potential of ML techniques, such as radiomic techniques and deep learning networks, to improve the decision-making process for EC patients in clinical practice. Recommendations have been made to improve study design and future applicability.

Esophageal cancer (EC) is of public health significance as one of the leading causes of cancer death worldwide. Accurate staging, treatment planning and prognostication in EC patients are of vital importance. Recent advances in machine learning (ML) techniques demonstrate their potential to provide novel quantitative imaging markers in medical imaging. Radiomics approaches that could quantify medical images into high-dimensional data have been shown to improve the imaging-based classification system in characterizing the heterogeneity of primary tumors and lymph nodes in EC patients. In this review, we aim to provide a comprehensive summary of the evidence of the most recent developments in ML application in imaging pertinent to EC patient care. According to the published results, ML models evaluating treatment response and lymph node metastasis achieve reliable predictions, ranging from acceptable to outstanding in their validation groups. Patients stratified by ML models in different risk groups have a significant or borderline significant difference in survival outcomes. Prospective large multi-center studies are suggested to improve the generalizability of ML techniques with standardized imaging protocols and harmonization between different centers.
- Research Article · 96 · 10.1145/2786763.2694358 · Mar 14, 2015 · ACM SIGARCH Computer Architecture News
Machine Learning (ML) techniques are pervasive tools in various emerging commercial applications, but they must be supported by powerful computer systems to process very large data. Although general-purpose CPUs and GPUs provide straightforward solutions, their energy efficiency is limited by their excessive support for flexibility. Hardware accelerators can achieve better energy efficiency, but each accelerator often accommodates only a single ML technique (family). According to the famous No-Free-Lunch theorem in the ML domain, however, an ML technique that performs well on one dataset may perform poorly on another, which implies that such an accelerator may sometimes lead to poor learning accuracy. Even setting learning accuracy aside, such an accelerator can become inapplicable simply because the concrete ML task is altered or the user chooses another ML technique. In this study, we present an ML accelerator called PuDianNao, which accommodates seven representative ML techniques: k-means, k-nearest neighbors, naive Bayes, support vector machine, linear regression, classification tree, and deep neural network. Benefiting from a thorough analysis of the computational primitives and locality properties of different ML techniques, PuDianNao can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm^2 while consuming only 596 mW. Compared with the NVIDIA K20M GPU (28 nm process), PuDianNao (65 nm process) is 1.20x faster and reduces energy consumption by 128.41x.
- Research Article · 29 · 10.1145/2775054.2694358 · Mar 14, 2015 · ACM SIGPLAN Notices
- Research Article · 14 · 10.1109/mis.2022.3152946 · Jan 1, 2022 · IEEE Intelligent Systems
Machine learning (ML) techniques have numerous applications in many fields, including healthcare, medicine, finance, marketing, and cyber security. For example, ML techniques are being applied to determine whether to give a loan to a customer or whether a computing system has been attacked. However, ML techniques may themselves be subject to attack and may discriminate when determining who should get the loan. Therefore, ML techniques have to be secure, ensure the privacy of individuals, incorporate fairness, and be accurate. Such a collection of ML techniques has come to be known as trustworthy machine learning (trustworthy ML). This article describes an architecture to support scalable trustworthy ML and the features that have to be incorporated into ML techniques to ensure that they are trustworthy.
- Research Article · 1 · 10.1002/ima.22905 · May 11, 2023 · International Journal of Imaging Systems and Technology
COVID-19 has affected more than 760 million people worldwide, as per the latest records of the WHO. The rapid proliferation of COVID-19 patients created not only a health emergency but also an economic crisis. An early and accurate diagnosis of COVID-19 can help in combating this deadly virus. In line with this, researchers have proposed several machine learning (ML) and deep learning (DL) techniques for detecting COVID-19 since 2020. This article presents currently available manual diagnosis methods along with their limitations, and provides an extensive survey of ML and DL techniques that can support medical professionals in the precise diagnosis of COVID-19. The ML methods explored are k-nearest neighbor, support vector machine (SVM), artificial neural network, decision tree, and naive Bayes; the DL methods are deep neural network, convolutional neural network (CNN), region-based convolutional neural network, and long short-term memory networks. Details of the latest open-source COVID-19 datasets, consisting of x-ray and computed tomography scan images, are also provided. A comparative analysis of ML and DL techniques developed for COVID-19 detection is carried out in terms of methodology, datasets, sample size, type of classification, performance, and limitations. SVM is found to be the most frequently used ML technique, while CNN is the most commonly used DL technique for COVID-19 detection. Challenges of existing datasets are identified, including dataset size and quality, lack of labeled datasets, severity levels, data imbalance, and privacy concerns; establishing a benchmark dataset that overcomes these challenges is recommended to enhance the effectiveness of ML and DL techniques. Hurdles in implementing ML and DL techniques in real-time clinical settings are also highlighted. In addition, motivated by the existing methods, the research is extended with an optimized DL model that attains improved performance, above 90%, based on efficient statistical and deep features and proper classifier tuning.
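To make the survey's most frequently used ML baseline concrete, here is a minimal sketch of an SVM classifier over flattened image features. The random arrays stand in for x-ray/CT-derived features, and the 64x64 image size and RBF kernel are illustrative assumptions rather than choices from any surveyed paper.

```python
# Minimal sketch: SVM over flattened chest-image features (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4096))     # e.g. 64x64 images, flattened
y = rng.integers(0, 2, size=200)     # 1 = COVID-19, 0 = non-COVID (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Real pipelines would replace the placeholders with radiomic or CNN-extracted features from labeled scans.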
- Research Article · 10.1016/j.seizure.2025.01.021 · Mar 1, 2025 · Seizure
Utilizing machine learning techniques for EEG assessment in the diagnosis of epileptic seizures in the brain: A systematic review and meta-analysis.
- Research Article · 15 · 10.3389/feart.2021.701837 · Sep 15, 2021 · Frontiers in Earth Science
Landslide disaster risk reduction necessitates investigating the different geotechnical causal factors of slope failures. Machine learning (ML) techniques have been proposed to study causal factors across many application areas. However, the development of ensemble ML techniques for identifying the geotechnical causal factors of slope failures, and for their subsequent prediction, has been lacking in the literature. The primary goal of this research is to develop and evaluate novel feature selection methods for identifying causal factors of slope failures and to assess the potential of ensemble and individual ML techniques for slope failure prediction. Twenty-one geotechnical causal factors were obtained from 60 sites (both landslide and non-landslide) spread across a landslide-prone area in Mandi, India. Relevant causal factors were evaluated by developing a novel ensemble feature selection method that averages different individual feature selection methods: correlation, information gain, gain ratio, OneR, and F-ratio. Furthermore, different ensemble ML techniques (Random Forest (RF), AdaBoost (AB), Bagging, Stacking, and Voting) and individual ML techniques (Bayesian network (BN), decision tree (DT), multilayer perceptron (MLP), and support vector machine (SVM)) were calibrated on 70% of the locations and tested on the remaining 30% of the sites. The ensemble feature selection method yielded six major contributors to slope failures: relative compaction, porosity, saturated permeability, slope angle, angle of internal friction, and in-situ moisture content. Furthermore, the ensemble RF and AB techniques performed best on the test data compared to the other ensemble and individual ML techniques. The present study discusses the implications of the different causal factors for slope failure prediction.
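The ensemble feature-selection idea described above can be sketched as averaging min-max-normalized relevance scores across several individual scorers. The sketch below uses absolute Pearson correlation, mutual information (as an information-gain analogue), and the ANOVA F-ratio; the paper's OneR and gain-ratio scorers have no direct scikit-learn equivalent and are omitted, and the data are synthetic placeholders for the 60 sites and their geotechnical factors.

```python
# Minimal sketch of ensemble feature selection: average several normalized
# relevance scores, then rank features (synthetic landslide/non-landslide data).
import numpy as np
from sklearn.feature_selection import f_classif, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))          # 60 sites, 8 geotechnical factors
y = rng.integers(0, 2, size=60)       # 1 = landslide, 0 = non-landslide

def minmax(s):
    s = np.nan_to_num(s)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
mi = mutual_info_classif(X, y, random_state=0)
f_ratio, _ = f_classif(X, y)

ensemble = (minmax(corr) + minmax(mi) + minmax(f_ratio)) / 3
ranking = np.argsort(ensemble)[::-1]  # most relevant features first
print("feature ranking:", ranking)
```

The top-ranked features would then feed the ensemble classifiers (RF, AdaBoost, etc.) described in the abstract.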
- Book Chapter · 10.58532/v2bs16ch1 · Nov 30, 2023
Machine learning (ML) techniques play a major role in the engineering world, and manufacturing industries likewise utilize them for various applications. Among these, the prediction or forecasting of material properties is a notable use of ML in manufacturing. ML techniques are broadly categorized into three types: supervised, semi-supervised, and unsupervised learning; the learning approach is chosen based on the problem to be solved. In this chapter, supervised learning for the prediction of material properties is presented. Initially, the properties of materials and the necessity of ML techniques for predicting them are described. Four supervised learning techniques, Random Forest (RF), Naive Bayes (NB), Support Vector Machine (SVM), and Artificial Neural Network (ANN), are then described for the prediction of material properties, and their performance is evaluated based on accuracy. The performance analysis shows that the ANN, with an accuracy of 98%, performs better than the other techniques.
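A minimal sketch of the chapter's accuracy comparison might look like the following, with the four supervised learners (RF, NB, SVM, ANN) fitted to the same tabular data. The synthetic classification dataset stands in for real material descriptors such as composition and processing parameters.

```python
# Minimal sketch: compare RF, NB, SVM, and ANN by held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "NB": GaussianNB(),
    "SVM": SVC(),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy = {acc:.3f}")
```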
- Conference Article · 5 · 10.1145/3422392.3422427 · Oct 21, 2020
Code smells are considered symptoms of poor implementation choices that may hamper software maintainability. Hence, code smells should be detected as early as possible to avoid software quality degradation. Unfortunately, detecting code smells is not a trivial task. Preliminary studies concluded that machine learning (ML) techniques are a promising way to better support smell detection; however, these techniques are hard to customize for early and accurate detection of specific smell types. Moreover, ML techniques usually require numerous code examples for training (a relevant dataset) in order to achieve satisfactory accuracy. Unfortunately, such a dependency on a large validated dataset is impractical and leads to late detection of code smells. Thus, a prevailing challenge is the early, customized detection of code smells given typically limited training data. In this direction, this paper reports a study in which we collected, from ten active projects, code smells that were actually refactored by developers, unlike studies that rely on code smells inferred by researchers. These smells were used to evaluate the accuracy of early code smell detection using seven ML techniques. Once such smells, considered important by developers, are taken into account, the ML techniques are able to customize detection to focus on smells observed as relevant in the investigated systems. The results showed that all analyzed techniques are sensitive to the type of smell and obtained good results for most of them, especially JRip and Random Forest. We also observed that the ML techniques did not need a large number of examples to reach their best accuracy results. This finding implies that ML techniques can be successfully used for early detection of smells without depending on the curation of a large dataset.
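A minimal sketch of metric-based smell detection with Random Forest, one of the paper's best performers, is shown below (JRip is a Weka rule learner with no direct scikit-learn counterpart). The metric columns and labels here are synthetic placeholders; in the study, labels came from smells that developers actually refactored.

```python
# Minimal sketch: Random Forest smell detector over code metrics
# (synthetic stand-ins for features like LOC, complexity, fan-in/out).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))         # one row per code element, 6 metrics
y = rng.integers(0, 2, size=300)      # 1 = smelly (refactored), 0 = clean

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("5-fold F1:", scores.mean().round(3))
```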
- Research Article · 2 · 10.1109/tcad.2019.2927523 · Jul 23, 2019 · IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
In recent years, machine learning (ML) techniques have proven to be powerful tools in various emerging applications. Traditionally, ML techniques are processed on general-purpose CPUs and GPUs, but their energy efficiency is limited by excessive support for flexibility. As an efficient alternative to CPUs/GPUs, hardware accelerators are still limited because they often accommodate only a single ML technique (family). However, different problems may require different ML techniques, which implies that such accelerators may achieve poor learning accuracy or even be ineffective. In this paper, we present a polyvalent accelerator architecture integrated with multiple processing cores, called ParaML, which accommodates ten representative ML techniques: k-means, k-nearest neighbors (k-NN), naive Bayes (NB), support vector machine (SVM), linear regression (LR), classification tree (CT), deep neural network (DNN), learning vector quantization (LVQ), Parzen window (PW), and principal component analysis (PCA). Benefiting from a thorough analysis of the computational primitives and locality properties of different ML techniques, the single-core ParaML can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm^2 while consuming only 596 mW, as estimated by ICC and PrimeTime PX with a post-synthesis netlist. Compared with the NVIDIA K20M GPU (28 nm process), the single-core ParaML (65 nm process) is 1.21x faster and reduces energy by 137.93x. We also compare the single-core ParaML with other accelerators. Compared with PRINS, the single-core ParaML achieves 72.09x and 2.57x energy benefit for k-NN and k-means, respectively, and speeds up each k-NN query by 44.76x. Compared with EIE, it achieves 5.02x speedup and 4.97x energy benefit with 11.62x less area when evaluated on a dense DNN. Compared with TPU, it achieves 2.45x better power efficiency (5647 Gop/W versus 2300 Gop/W) with 321.36x less area. Compared to the single-core version, the 8-core ParaML further improves the speedup, up to 3.98x, with an area of 13.44 mm^2 and a power of 2036 mW.
- Research Article · 10.1007/s10035-025-01586-9 · Oct 27, 2025 · Granular Matter
- Research Article · 10.1007/s10035-025-01582-z · Oct 13, 2025 · Granular Matter
- Research Article · 10.1007/s10035-025-01561-4 · Oct 13, 2025 · Granular Matter
- Research Article · 10.1007/s10035-025-01576-x · Oct 13, 2025 · Granular Matter
- Research Article · 10.1007/s10035-025-01581-0 · Oct 6, 2025 · Granular Matter
- Research Article · 10.1007/s10035-025-01579-8 · Oct 6, 2025 · Granular Matter
- Research Article · 10.1007/s10035-025-01577-w · Sep 29, 2025 · Granular Matter
- Research Article · 10.1007/s10035-025-01567-y · Sep 29, 2025 · Granular Matter
- Research Article · 10.1007/s10035-025-01543-6 · Aug 4, 2025 · Granular Matter
- Research Article · 10.1007/s10035-025-01568-x · Aug 4, 2025 · Granular Matter