The integration of slow-moving landslide features, MT-InSAR data, damage survey results and deep learning algorithms for building vulnerability zoning and forecast

Similar Papers
  • Conference Article
  • Citations: 3
  • DOI: 10.1109/ictai50040.2020.00012
Multi-Agent Feature Learning and Integration for Mixed Cooperative and Competitive Environment
  • Nov 1, 2020
  • Yaowen Zhang + 5 more

At present, most centralized training with decentralized execution (CTDE) multi-agent reinforcement learning (MARL) algorithms achieve good results in homogeneous scenarios. In heterogeneous multi-agent scenarios, however, differing roles, cooperation modeling and credit assignment problems make it difficult to learn effective collective strategies. In this paper, we propose a method of feature learning and feature integration for cooperation. Specifically, for feature learning, a graph attention network reduces the relationships between agents to a graph adjacency matrix representation, so that their feature vectors carry relational attributes. For feature integration, we use batch normalization (BN) to concatenate the learned features. In this way, agent relations are modeled by an end-to-end design, while the attention mechanism enhances communication between interrelated agents. Experiments show that our method significantly improves performance in heterogeneous multi-agent cooperative-competitive scenarios. Moreover, the output can be visualized to analyze the learned collaborative and focused-attack policies.
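As a rough sketch of the feature learning and integration idea described above (not the authors' implementation), a single attention layer can score all agent pairs to form a soft adjacency matrix, after which the attended features are batch-normalized and concatenated with the originals; all shapes and layer sizes here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AgentRelationEncoder(nn.Module):
    """Relation-aware feature learning for a set of agents (illustrative)."""

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden_dim)
        self.attn = nn.Linear(2 * hidden_dim, 1)  # pairwise attention score
        self.bn = nn.BatchNorm1d(hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_agents, feat_dim)
        h = self.proj(x)
        n = h.size(0)
        # Score every agent pair to form a soft adjacency matrix.
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        adj = F.softmax(self.attn(pairs).squeeze(-1), dim=-1)  # (n, n)
        relational = adj @ h  # each agent attends over its neighbours
        # Feature integration: batch-normalize, then concatenate.
        return torch.cat([x, self.bn(relational)], dim=-1)

enc = AgentRelationEncoder(feat_dim=16, hidden_dim=32)
print(enc(torch.randn(5, 16)).shape)  # torch.Size([5, 48])
```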

  • Research Article
  • Citations: 3
  • DOI: 10.21271/zjpas.34.2.3
Comprehensive Study for Breast Cancer Using Deep Learning and Traditional Machine Learning
  • Apr 12, 2022
  • Zanco Journal of Pure and Applied Sciences

  • Research Article
  • Citations: 15
  • DOI: 10.1371/journal.pcbi.1011428
Structure-based prediction of nucleic acid binding residues by merging deep learning- and template-based approaches
  • Sep 6, 2023
  • PLOS Computational Biology
  • Zheng Jiang + 2 more

Accurate prediction of nucleic acid binding residues is essential for the understanding of transcription and translation processes. Integration of feature- and template-based strategies could improve the prediction of these key residues in proteins. Nevertheless, traditional hybrid algorithms have been surpassed by recently developed deep learning-based methods, and the possibility of integrating deep learning- and template-based approaches to improve performance remains to be explored. To address these issues, we developed a novel structure-based integrative algorithm called NABind that can accurately predict DNA- and RNA-binding residues. A deep learning module was built based on diversified sequence and structural descriptors and edge-aggregated graph attention networks, while a template module was constructed by transforming the alignments between the query and its multiple templates into features for supervised learning. Furthermore, a stacking strategy was adopted to integrate the above two modules to improve prediction performance. Finally, a post-processing module based on the random walk algorithm was proposed to further correct the integrative predictions. Extensive evaluations indicated that our approach not only achieves excellent performance on both native and predicted structures but also outperforms existing hybrid algorithms and recent deep learning methods. The NABind server is available at http://liulab.hzau.edu.cn/NABind/.
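The stacking step can be illustrated with a small sketch in which the per-residue scores of two base modules (stand-ins for the deep learning and template modules, applied to synthetic labels) are combined by a logistic-regression meta-learner; everything here is an assumption for illustration, not NABind's actual modules.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)                     # binding / non-binding labels
p_deep = np.clip(y + rng.normal(0, 0.4, 500), 0, 1)  # "deep module" scores
p_tmpl = np.clip(y + rng.normal(0, 0.5, 500), 0, 1)  # "template module" scores

# Stacking: base-module scores become features for a meta-learner.
X_meta = np.column_stack([p_deep, p_tmpl])
meta = LogisticRegression().fit(X_meta, y)
print("stacked accuracy:", meta.score(X_meta, y))
```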

  • Book Chapter
  • DOI: 10.1007/978-3-540-28648-6_77
Feedback Selective Visual Attention Model Based on Feature Integration Theory
  • Jan 1, 2004
  • Lianwei Zhao + 1 more

In this paper the visual processing architecture is assumed to be hierarchical in structure, with units within this network receiving both feed-forward and feedback connections. We propose a neural computational model of the visual system based on the hierarchical structure of feedback selectiveness of visual attention information and on feature integration theory. The proposed model consists of three stages. The visual image input is first decomposed into a set of topographic feature maps in a massively parallel manner at the saliency stage. The feature integration stage is based on feature integration theory, a representative theory for explaining the phenomena occurring in the visual system as a consistent process. At the last stage, through feedback selection, the salient stimulus is localized in each feature map. We carried out computer simulations and confirmed that the proposed model is feasible and effective.
Keywords: Visual Attention, Posterior Parietal Cortex, Feedback Connection, Representative Theory, Binding Problem
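A toy sketch of the first two stages (decomposition into parallel topographic feature maps, then integration into a single saliency map) might look like the following; the feature choices are assumptions and the feedback-selection stage is omitted.

```python
import numpy as np

def feature_maps(img: np.ndarray) -> dict:
    # Decompose the input into simple topographic feature maps.
    gy, gx = np.gradient(img.astype(float))
    return {"intensity": img.astype(float), "edges": np.hypot(gx, gy)}

def saliency(img: np.ndarray) -> np.ndarray:
    # Crude feature integration: normalize each map to [0, 1] and average.
    maps = feature_maps(img).values()
    norm = [(m - m.min()) / (np.ptp(m) + 1e-9) for m in maps]
    return sum(norm) / len(norm)

img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0  # one bright square as the salient stimulus
s = saliency(img)
print("most salient pixel:", np.unravel_index(s.argmax(), s.shape))
```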

  • Dissertation
  • DOI: 10.31390/gradschool_dissertations.2428
Probabilistic and Deep Learning Algorithms for the Analysis of Imagery Data
  • Jun 10, 2022
  • Saikat Basu

Accurate object classification is a challenging problem for various low to high resolution imagery data. This applies to both natural and synthetic image datasets. However, each object recognition dataset poses its own distinct set of domain-specific problems. In order to address these issues, we need to devise intelligent learning algorithms that require a deep understanding and careful analysis of the feature space. In this thesis, we introduce three new learning frameworks for the analysis of both airborne images (the NAIP dataset) and handwritten digit datasets without and with noise (MNIST and n-MNIST, respectively). First, we propose a probabilistic framework for the analysis of the NAIP dataset which includes (1) an unsupervised segmentation module based on the Statistical Region Merging algorithm, (2) a feature extraction module that extracts a set of standard hand-crafted texture features from the images, (3) a supervised classification algorithm based on Feedforward Backpropagation Neural Networks, and (4) a structured prediction framework using Conditional Random Fields that integrates the results of the segmentation and classification modules into a single composite model to generate the final class labels. Next, we introduce two new datasets, SAT-4 and SAT-6, sampled from the NAIP imagery and use them to evaluate a multitude of deep learning algorithms, including Deep Belief Networks (DBN), Convolutional Neural Networks (CNN) and Stacked Autoencoders (SAE), for generating class labels. Finally, we propose a learning framework that integrates hand-crafted texture features with a DBN. A DBN uses an unsupervised pre-training phase to initialize the parameters of a Feedforward Backpropagation Neural Network to a global error basin, which can then be improved with a round of supervised fine-tuning; these networks can subsequently be used for classification. We show that the integration of hand-crafted features with a DBN yields a significant improvement in performance compared to traditional DBN models that take raw image pixels as input, and we investigate why this integration proves particularly useful for aerial datasets using a statistical analysis based on the Distribution Separability Criterion. We then introduce a new dataset called noisy-MNIST (n-MNIST) by adding (1) additive white Gaussian noise (AWGN), (2) motion blur and (3) reduced contrast combined with AWGN to the MNIST dataset, and present a learning algorithm that combines probabilistic quadtrees and Deep Belief Networks. This dynamic integration of the Deep Belief Network with probabilistic quadtrees provides a significant improvement over traditional DBN models.
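The integration idea in the final framework (hand-crafted texture statistics concatenated with raw pixels before the network input) can be sketched as follows; an MLP stands in for the DBN, and the data and texture descriptors are toy assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
imgs = rng.random((200, 8, 8))                       # toy image patches
labels = (imgs.mean(axis=(1, 2)) > 0.5).astype(int)

def handcrafted(img: np.ndarray) -> np.ndarray:
    # Stand-ins for texture descriptors: mean, variance, gradient energy.
    gy, gx = np.gradient(img)
    return np.array([img.mean(), img.var(), (gx**2 + gy**2).mean()])

# Integration: raw pixels concatenated with hand-crafted statistics.
X = np.hstack([imgs.reshape(len(imgs), -1),
               np.array([handcrafted(i) for i in imgs])])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
print("training accuracy:", clf.fit(X, labels).score(X, labels))
```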

  • Research Article
  • Citations: 54
  • DOI: 10.1007/s13202-021-01087-4
Prediction performance advantages of deep machine learning algorithms for two-phase flow rates through wellhead chokes
  • Feb 23, 2021
  • Journal of Petroleum Exploration and Production
  • Hossein Shojaei Barjouei + 6 more

Two-phase flow rate estimation of liquid and gas flow through wellhead chokes is essential for determining and monitoring production performance from oil and gas reservoirs at specific well locations. Liquid flow rate (QL) tends to be nonlinearly related to its influencing variables, making empirical correlations unreliable for predictions applied to different reservoir conditions and favoring machine learning (ML) algorithms for that purpose. Recent advances in deep learning (DL) algorithms make them useful for predicting wellhead choke flow rates for large field datasets and suitable for wider application once trained. DL has not previously been applied to predict QL for a large oil field. In this study, 7245 multi-well data records from the Sorush oil field are used to compare the QL prediction performance of traditional empirical, ML and DL algorithms based on four influencing variables: choke size (D64), wellhead pressure (Pwh), oil specific gravity (γo) and gas–liquid ratio (GLR). The prevailing flow regime for the wells evaluated is critical flow. The DL algorithm substantially outperforms the other algorithms considered in terms of QL prediction accuracy, predicting QL for the testing subset with a root-mean-squared error (RMSE) of 196 STB/day and a coefficient of determination (R2) of 0.9969 for the Sorush dataset. The QL prediction accuracy of the models evaluated for this dataset can be arranged in descending order: DL > DT > RF > ANN > SVR > Pilehvari > Baxendell > Ros > Gilbert > Achong. Analysis reveals that the input variable GLR has the greatest, whereas the input variable D64 has the least, relative influence on the dependent variable QL.
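As an illustration of this kind of regression setup, the sketch below trains a small neural regressor on synthetic data with the same four inputs (D64, Pwh, γo, GLR) and scores it with RMSE and R2; the functional form and all values are assumptions, not the Sorush field relationship.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: choke size D64, wellhead pressure Pwh, oil SG, gas-liquid ratio GLR.
X = rng.uniform([16, 500, 0.80, 100], [64, 3000, 0.95, 2000], size=(2000, 4))
d64, pwh, _, glr = X.T
ql = d64**1.9 * pwh / (10.0 * np.sqrt(glr)) + rng.normal(0, 50, 2000)  # toy QL

Xtr, Xte, ytr, yte = train_test_split(X, ql, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=2000, random_state=0))
pred = model.fit(Xtr, ytr).predict(Xte)
print("RMSE:", mean_squared_error(yte, pred) ** 0.5)
print("R2:", r2_score(yte, pred))
```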

  • Research Article
  • Citations: 77
  • DOI: 10.1155/2022/9023719
Intrusion Detection System for Industrial Internet of Things Based on Deep Reinforcement Learning
  • Jan 1, 2022
  • Wireless Communications and Mobile Computing
  • Sumegh Tharewal + 5 more

The Industrial Internet of Things has grown significantly in recent years. While industrial digitalization, automation, and intelligence introduced a slew of cyber risks, the complex and varied Industrial Internet of Things environment also provides a new attack surface for network attackers. As a result, conventional intrusion detection technology cannot satisfy the network threat discovery requirements of today's Industrial Internet of Things environment. In this research, the authors use reinforcement learning rather than supervised or unsupervised learning because it can improve the decision-making ability of the learning process: deep networks transform large-scale raw input data into higher-level abstract representations, while learning from feedback signals lets the agent, in the absence of guiding knowledge, find good solutions through trial-and-error interaction with the environment. In this respect, this article presents a proximal policy optimization method for an Industrial Internet of Things intrusion detection system based on a deep reinforcement learning algorithm. This method combines deep learning's observation capability with reinforcement learning's decision-making capability to enable efficient detection of different kinds of cyber assaults on the Industrial Internet of Things. The DRL-IDS intrusion detection system is built on a feature selection method based on LightGBM, which efficiently selects the most attractive feature set from Industrial Internet of Things data; paired with deep learning algorithms, it effectively detects intrusions. First, the application uses LightGBM's feature selection algorithm to extract the most compelling feature set from Industrial Internet of Things data; then, in conjunction with the deep learning algorithm, the hidden layers of a multilayer perceptron network are used as the shared network structure for the value network and policy network in the PPO2 algorithm; and finally, the intrusion detection model is constructed using the PPO2 algorithm with ReLU activations. Numerous tests conducted on a publicly available Industrial Internet of Things dataset demonstrate that the suggested intrusion detection system detects 99 percent of the different kinds of network attacks on the Industrial Internet of Things, and the accuracy rate improves by 0.9%. The accuracy, precision, recall rate, F1 score, and other performance indicators are superior to those of existing intrusion detection systems based on deep learning models such as LSTM, CNN, and RNN, as well as deep reinforcement learning models such as DDQN and DQN.
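A minimal sketch of the two-stage pipeline described above: a gradient boosting model ranks features, the top-k subset is kept, and a multilayer perceptron is trained on it. scikit-learn's GradientBoostingClassifier stands in for LightGBM, the MLP stands in for the PPO2 policy/value network, and the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           random_state=0)

# Stage 1: rank features by gradient-boosting importance, keep the top 8.
gbm = GradientBoostingClassifier(random_state=0).fit(X, y)
top = np.argsort(gbm.feature_importances_)[::-1][:8]

# Stage 2: train the downstream network on the selected subset only.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X[:, top], y)
print("selected features:", sorted(top.tolist()))
print("accuracy on selected subset:", mlp.score(X[:, top], y))
```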

  • Research Article
  • Citations: 5
  • DOI: 10.1155/2022/3452176
Deep Learning Algorithm-Based Ultrasound Image Information in Diagnosis and Treatment of Pernicious Placenta Previa.
  • Jun 6, 2022
  • Computational and Mathematical Methods in Medicine
  • Xiao Yang + 2 more

This study explored the value of the deep dictionary learning algorithm in constructing a B ultrasound scoring system and its application in the clinical diagnosis and treatment of pernicious placenta previa (PPP). 60 patients with PPP were divided into a low-risk group (severe, implantable) and a high-risk group (adhesive, penetrating) according to their clinical characteristics, B ultrasound imaging characteristics, and postpartum pathological examination results. Applying the deep learning algorithm to PPP ultrasound image information, a B ultrasound image diagnostic scoring system was established to predict the depth of the various types of placenta accreta. The results showed that the cut-off values of the severe, implantable, adhesive, and penetrating types were <2.3, 2.3-6.5, 6.5-9, and ≥9 points, respectively; there were significant differences in the termination of pregnancy and neonatal birth weight between the two groups (P < 0.05); and the positive predictive value, negative predictive value, and false positive rate of ultrasound images based on the deep dictionary learning algorithm for PPP were 95.33%, 94.89%, and 3.56%, respectively. Thus, the ultrasound image diagnostic scoring system based on the deep learning algorithm has an important predictive role for PPP, and can provide a more targeted diagnosis and treatment plan for patients in clinical practice and improve prediction and treatment efficiency.
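Only the four cut-off values below come from the abstract; the mapping function itself is an illustrative sketch of how such a score-based system assigns an accreta type.

```python
def accreta_type(score: float) -> str:
    """Map a B ultrasound score to an accreta type (cut-offs from the abstract)."""
    if score < 2.3:
        return "severe"
    if score < 6.5:
        return "implantable"
    if score < 9.0:
        return "adhesive"
    return "penetrating"

for s in (1.5, 4.0, 7.2, 9.8):
    print(s, "->", accreta_type(s))
```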

  • Research Article
  • Citations: 2
  • DOI: 10.15678/znuek.2018.0978.0603
Predicting Bankruptcy at Polish Companies: A Comparison of Selected Machine Learning and Deep Learning Algorithms
  • Jan 1, 2018
  • Zeszyty Naukowe Uniwersytetu Ekonomicznego w Krakowie
  • Joanna Wyrobek

Insolvency prediction is one of the crucial abilities in corporate finance and financial management. It is critical in accounts receivable management, capital budgeting decisions, financial analysis, capital structure management, going concern assessment and co-operation with other companies. The purpose of this paper is to compare the efficiency of selected deep learning and machine learning algorithms trained on a representative sample of Polish companies for the period 2008–2017. In particular, the paper tested the following popular machine learning algorithms: discriminant analysis (DA), logit (L), support vector machines (SVM), random forest (RF), gradient boosting decision trees (GB), neural network with one hidden layer (NN), convolutional neural network (CNN), and naïve Bayes (NB). The research hypotheses evaluated in the paper state that, given access to a large sample of companies, the most accurate algorithms (first choices) in bankruptcy prediction will be gradient boosting decision trees (H1), random forest (H2) and neural networks (H3) (deep learning). The initial hypotheses were formulated based on practitioners' opinions regarding the usefulness of various machine learning and artificial intelligence algorithms in bankruptcy prediction. As the results of the research suggest, deep learning and machine learning algorithms proved to have very comparable efficiency. A new element introduced in the paper is that the models were trained on a representative sample of companies (for the years 2008–2013) and the testing phase also used a significant number of bankrupt and active companies (validation used a completely different set of companies than the training phase: data were taken from a different time period, 2014–2017, and the companies in the two sets were entirely distinct).
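A minimal sketch of this kind of algorithm comparison, using scikit-learn stand-ins for several of the algorithm families named above (DA, logit, SVM, RF, GB, NB) on synthetic data; the neural network variants are omitted for brevity and no Polish company data are reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    "DA": LinearDiscriminantAnalysis(),
    "logit": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "RF": RandomForestClassifier(random_state=0),
    "GB": GradientBoostingClassifier(random_state=0),
    "NB": GaussianNB(),
}
for name, m in models.items():
    print(f"{name}: {m.fit(Xtr, ytr).score(Xte, yte):.3f}")  # held-out accuracy
```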

  • Research Article
  • Citations: 58
  • DOI: 10.1007/s13202-022-01531-z
Predicting shear wave velocity from conventional well logs with deep and hybrid machine learning algorithms
  • Jul 11, 2022
  • Journal of Petroleum Exploration and Production Technology
  • Meysam Rajabi + 9 more

Shear wave velocity (VS) data from sedimentary rock sequences are a prerequisite for implementing most mathematical models of petroleum engineering geomechanics. Extracting such data by analyzing finite reservoir rock cores is very costly and limited. The high cost of the advanced dipole sonic wellbore logging service and its implementation in only a few wells of a field have placed many limitations on geomechanical modeling. On the other hand, VS tends to be nonlinearly related to many of its influencing variables, making empirical correlations unreliable for its prediction. Hybrid machine learning (HML) algorithms are well suited to improving predictions of such variables. Recent advances in deep learning (DL) algorithms suggest that they too should be useful for predicting VS for large gas and oil field datasets, but this has yet to be verified. In this study, 6622 data records from two wells in the giant Iranian Marun oil field (MN#163 and MN#225) are used to train HML and DL algorithms. 2072 independent data records from another well (MN#179) are used to verify the VS prediction performance based on eight well-log-derived influencing variables. The input variables are standard full-set recorded parameters in conventional oil and gas well logging data available in most older wells. DL predicts VS for the supervised validation subset with a root mean squared error (RMSE) of 0.055 km/s and a coefficient of determination (R2) of 0.9729, and achieves similar prediction accuracy when applied to an unseen dataset. Comparing the VS prediction performance results, the DL convolutional neural network model slightly outperforms the HML algorithms tested. Both the DL and HML models substantially outperform five commonly used empirical relationships for calculating VS from VP when applied to the Marun Field dataset. Concerns regarding the model's integrity and reproducibility were also addressed by evaluating it on data from another well in the field. The findings of this study can advance knowledge of production patterns and the sustainability of oil reservoirs and help prevent enormous geomechanics-related damage through a better understanding of wellbore instability and casing collapse problems.
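For reference, the two quoted metrics can be computed as below; the VS arrays are illustrative values in km/s, not data from the Marun field.

```python
import numpy as np

def rmse(y, p):
    # Root mean squared error.
    y, p = np.asarray(y), np.asarray(p)
    return float(np.sqrt(np.mean((y - p) ** 2)))

def r2(y, p):
    # Coefficient of determination.
    y, p = np.asarray(y), np.asarray(p)
    return float(1.0 - ((y - p) ** 2).sum() / ((y - y.mean()) ** 2).sum())

vs_true = np.array([2.10, 2.35, 2.62, 2.88, 3.05])  # km/s, illustrative
vs_pred = np.array([2.14, 2.30, 2.66, 2.85, 3.11])
print("RMSE (km/s):", rmse(vs_true, vs_pred), "R2:", r2(vs_true, vs_pred))
```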

  • Research Article
  • Citations: 1
  • DOI: 10.55524/ijircst.2022.10.5.20
Comparative Study and Utilization of Best Deep Learning Algorithms for the Image Processing
  • Sep 25, 2022
  • International Journal of Innovative Research in Computer Science & Technology
  • Dr Kanakam Siva Rama Prasad + 2 more

Deep learning has gained immense popularity in scientific computing, and its algorithms are widely used in industries that solve complex problems. Every deep learning algorithm uses a different type of neural network to perform its intended tasks. Deep learning (DL) algorithms have emerged from different machine learning and soft computing methodologies, and a number of DL algorithms have recently been introduced in the scientific community and applied in various application fields. Today, the use of DL has become indispensable due to its intelligence, effective learning, accuracy and reliability in model creation. However, a comprehensive list of DL algorithms has not yet been presented in the scientific literature. This article lists the most popular DL algorithms and their application areas. Deep learning uses artificial neural networks (ANNs) to perform complex calculations on huge amounts of data; it is a type of machine learning based on the structure and function of the human brain. Deep learning algorithms train machines by learning from examples. Industries such as healthcare, e-commerce, entertainment and advertising often use deep learning.

  • Research Article
  • Citations: 14
  • DOI: 10.3390/s22135006
A Novel Method for Improved Network Traffic Prediction Using Enhanced Deep Reinforcement Learning Algorithm
  • Jul 2, 2022
  • Sensors (Basel, Switzerland)
  • Nagaiah Mohanan Balamurugan + 3 more

Network data traffic is increasing as networks expand to serve various applications, with text, image, audio, and video all among the inevitable needs. Network traffic pattern identification and analysis of the data content of traffic are essential for different needs and different scenarios. Many approaches have been followed, both before and after the introduction of machine and deep learning algorithms as intelligent computation. Network traffic analysis is the process of capturing the traffic of a network and observing it deeply to predict how the network's traffic will manifest. To enhance the quality of service (QoS) of a network, it is important to estimate the network traffic and analyze its accuracy and precision, as well as the false positive and negative rates, with suitable algorithms. This work proposes a new method using an enhanced deep reinforcement learning (EDRL) algorithm to improve network traffic analysis and prediction. Its importance lies in contributing towards intelligence-based network traffic prediction and solving network management issues. An experiment was carried out to check the accuracy and precision, as well as the false positive and false negative parameters, with EDRL. Convolutional neural network (CNN) machine and deep learning algorithms were also used to predict the different types of network traffic, labeled as text-based, video-based, and unencrypted and encrypted data traffic. The EDRL algorithm outperformed the CNN algorithm, with a mean accuracy of 97.20%, mean precision of 97.343%, mean false positive rate of 2.657% and mean false negative rate of 2.527%.
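The four reported quantities follow directly from a binary confusion matrix, as the short sketch below shows; the counts are illustrative, not the paper's results.

```python
def rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    # Accuracy, precision, and false positive/negative rates from a
    # binary confusion matrix.
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

print(rates(tp=950, fp=26, tn=974, fn=50))  # illustrative counts
```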

  • Research Article
  • DOI: 10.1186/s12911-025-03056-x
A deep learning model for predicting systemic lupus erythematosus-associated epitopes
  • Jul 1, 2025
  • BMC Medical Informatics and Decision Making
  • Jiale He + 2 more

Background: The accurate prediction of epitopes associated with Systemic Lupus Erythematosus (SLE) plays a vital role in advancing our understanding of autoimmune pathogenesis and in designing effective immunotherapeutics. Traditional bioinformatics methods often struggle to capture the intricate sequence patterns and high-dimensional signals characteristic of epitope data. Deep learning presents a compelling alternative, with its ability to perform automatic feature learning and model complex dependencies inherent in biological sequences. This study proposes a hybrid deep learning architecture that synergistically integrates handcrafted biochemical features with data-driven deep sequence modeling to improve the identification of SLE-associated epitopes. Methods: The framework comprises six interconnected components: (1) handcrafted feature extraction encoding biochemical and physicochemical attributes; (2) an embedding layer for dense sequence representation; (3) a Convolutional Neural Network (CNN) branch that captures local patterns from handcrafted features; (4) a Long Short-Term Memory branch for learning temporal dependencies in sequence data; (5) a scaled dot-product attention-based fusion module that integrates complementary information from both branches; and (6) a Multi-Layer Perceptron for final classification. Model evaluation employed metrics such as Accuracy, Precision, Recall, F1-score, and the area under the receiver operating characteristic curve (ROC AUC). Results: The hybrid model outperformed both baseline machine learning algorithms and ablated versions of itself. It achieved a ROC AUC of 0.9506 and an F1-score of 0.8333 on the SLE epitope prediction task. Notably, ablation studies revealed that the CNN component had the most substantial influence on performance, while the custom fusion mechanism yielded better integration of features than conventional strategies. These findings underscore the model's robustness and capacity to generalize across complex epitope prediction tasks. Conclusion: This work presents an interpretable, biologically informed deep learning approach for predicting SLE-associated epitopes. By merging domain-specific handcrafted features with dynamic deep learning representations, the model not only enhances predictive accuracy but also provides meaningful biological insights. The framework holds promise for broader applications in immunoinformatics and autoimmune disease research.
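A minimal PyTorch (2.x) sketch of the six-component architecture described above: embedding, a CNN branch over handcrafted per-residue features, an LSTM branch over the sequence, scaled dot-product attention fusing the two, and an MLP head. All dimensions, the vocabulary size, and the pooling choice are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridEpitopeNet(nn.Module):
    def __init__(self, vocab=21, emb=32, hidden=64, n_handcrafted=10):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)                   # (2) embedding
        self.cnn = nn.Conv1d(n_handcrafted, hidden,
                             kernel_size=3, padding=1)          # (3) CNN branch
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)      # (4) LSTM branch
        self.mlp = nn.Sequential(nn.Linear(hidden, 32),
                                 nn.ReLU(), nn.Linear(32, 1))   # (6) MLP head

    def forward(self, seq_tokens, handcrafted):
        # seq_tokens: (B, L) residue indices; handcrafted: (B, L, n_handcrafted)
        # ((1) handcrafted feature extraction happens upstream of the model.)
        h_lstm, _ = self.lstm(self.embed(seq_tokens))                  # (B, L, H)
        h_cnn = self.cnn(handcrafted.transpose(1, 2)).transpose(1, 2)  # (B, L, H)
        # (5) Scaled dot-product attention fusing the two branches:
        # the CNN features query the LSTM features.
        fused = F.scaled_dot_product_attention(h_cnn, h_lstm, h_lstm)
        return torch.sigmoid(self.mlp(fused.mean(dim=1)))              # (B, 1)

net = HybridEpitopeNet()
out = net(torch.randint(0, 21, (4, 50)), torch.randn(4, 50, 10))
print(out.shape)  # torch.Size([4, 1])
```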

  • Book Chapter
  • Citations: 1
  • DOI: 10.1007/978-981-15-8760-3_13
Classification of Liver Cancer Images Based on Deep Learning
  • Jan 1, 2020
  • Hui Ye + 3 more

With the rapid development of deep learning, its methods are being increasingly applied to the field of medical imaging. Liver cancer, which has one of the highest rates of morbidity and mortality in the world, is a great threat to people's health. This study aims to apply Convolutional Neural Networks to the grade classification of liver cancer images. DCE-MRI and DWI, two modes of hepatocellular carcinoma images, were originally used separately to grade liver cancers; we combine these two image modes to improve the prediction accuracy. The study finds that the features of the two modes can be complementary and can improve the grading classification of liver cancer. Comparing traditional machine learning and deep learning, the study demonstrates that the grading accuracy by machine learning from the integration of features is 87.8%, while the accuracy from deep learning reaches 90.5%. The improvement in grading accuracy is due to deep learning's ability to extract the appropriate features. In addition, the presence of microvascular invasion is an important factor in the recurrence of liver cancer after surgery, so the experiment also uses deep learning to predict microvascular invasion. The accuracy of the ADC map prediction reached 69.2%, demonstrating that liver cancer images can also predict microvascular invasion to a certain extent.

  • Research Article
  • Citations: 6
  • DOI: 10.1007/s00521-023-08341-2
A framework for classifying breast cancer based on deep features integration and selection
  • Feb 17, 2023
  • Neural Computing and Applications
  • Abdallah M Hassan + 2 more

Deep convolutional neural networks (DCNNs) are one of the most advanced techniques for classifying images in a range of applications. Breast cancer is one of the most prevalent cancers causing death in women, and early detection and treatment are essential for survival rates to increase. Deep learning (DL) can help radiologists diagnose and classify breast cancer lesions. This paper proposes a computer-aided system based on DL techniques for automatically classifying breast cancer tumors in histopathological images. Nine DCNN architectures are used in this work, and four schemes are performed in the proposed framework to find the best approach. The first scheme consists of pre-trained DCNNs based on the transfer learning concept. The second performs feature extraction with the DCNN architectures and uses a support vector machine (SVM) classifier for evaluation. The third performs feature integration to show how the integrated deep features may enhance the SVM classifiers' accuracy. Finally, in the fourth scheme, the Chi-square (χ2) feature selection method is applied to reduce the large feature size produced by the feature integration step. The results of the proposed system show promising performance for breast cancer classification, with an accuracy of 99.24%. The system's performance shows that the proposed tool is suitable to assist radiologists in diagnosing breast cancer tumors.
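Schemes three and four can be sketched roughly as follows: feature vectors from two stand-in extractors are concatenated (feature integration), reduced with Chi-square selection, and classified with an SVM. Random features stand in for real DCNN activations; all values are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 300)                           # benign / malignant labels
feats_a = rng.random((300, 256)) + 0.2 * y[:, None]   # "DCNN A" features
feats_b = rng.random((300, 256)) + 0.1 * y[:, None]   # "DCNN B" features
X = np.hstack([feats_a, feats_b])                     # scheme 3: feature integration

# Scheme 4: Chi-square selection (needs non-negative inputs), then an SVM.
clf = make_pipeline(SelectKBest(chi2, k=64), SVC())
print("training accuracy:", clf.fit(X, y).score(X, y))
```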
