Artificial intelligence in interdisciplinary life science and drug discovery research.
- Research Article
3
- 10.21271/zjpas.34.2.3
- Apr 12, 2022
- ZANCO JOURNAL OF PURE AND APPLIED SCIENCES
Comprehensive Study for Breast Cancer Using Deep Learning and Traditional Machine Learning
- Research Article
965
- 10.1007/s11030-021-10217-3
- Jan 1, 2021
- Molecular Diversity
Drug design and development is an important area of research for pharmaceutical companies and chemical scientists. However, low efficacy, off-target delivery, time consumption, and high cost pose hurdles and challenges that impact drug design and discovery. Further, complex and big data from genomics, proteomics, microarray data, and clinical trials also pose obstacles in the drug discovery pipeline. Artificial intelligence and machine learning technology play a crucial role in drug discovery and development; in particular, artificial neural networks and deep learning algorithms have modernized the area. Machine learning and deep learning algorithms have been implemented in several drug discovery processes such as peptide synthesis, structure-based virtual screening, ligand-based virtual screening, toxicity prediction, drug monitoring and release, pharmacophore modeling, quantitative structure–activity relationship, drug repositioning, polypharmacology, and physicochemical activity. Past evidence strengthens the case for implementing artificial intelligence and deep learning in this field. Moreover, novel data mining, curation, and management techniques provide critical support to recently developed modeling algorithms. In summary, advances in artificial intelligence and deep learning provide an excellent opportunity for the rational drug design and discovery process, which will eventually benefit mankind.

Graphic abstract: The primary concerns associated with drug design and development are time consumption and production cost. Further, inefficiency, inaccurate target delivery, and inappropriate dosage are other hurdles that inhibit the process of drug delivery and development. With advancements in technology, computer-aided drug design integrating artificial intelligence algorithms can eliminate the challenges and hurdles of traditional drug design and development. Artificial intelligence can be viewed as a superset comprising machine learning, whereas machine learning comprises supervised learning, unsupervised learning, and reinforcement learning. Further, deep learning, a subset of machine learning, has been extensively implemented in drug design and development. Artificial neural networks, deep neural networks, support vector machines, classification and regression, generative adversarial networks, symbolic learning, and meta-learning are examples of the algorithms applied to the drug design and discovery process. Artificial intelligence has been applied across the drug design and development process: from peptide synthesis to molecule design, virtual screening to molecular docking, quantitative structure–activity relationship to drug repositioning, protein misfolding to protein–protein interactions, and molecular pathway identification to polypharmacology. Artificial intelligence principles have been applied to the classification of active and inactive compounds, monitoring of drug release, pre-clinical and clinical development, primary and secondary drug screening, biomarker development, pharmaceutical manufacturing, identification of bioactivity and physicochemical properties, prediction of toxicity, and identification of mode of action.
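As a concrete illustration of one item on this list, below is a minimal ligand-based QSAR-style sketch, assuming RDKit and scikit-learn are installed; the SMILES strings, activity labels, and fingerprint settings are hypothetical placeholders for illustration, not data or methods from the review.

```python
# Hedged sketch: a ligand-based activity classifier on Morgan fingerprints.
# All molecules and labels below are invented examples.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
labels = [0, 1, 1, 0]  # hypothetical inactive/active flags

def featurize(smi):
    # Encode a molecule as a fixed-length circular-fingerprint bit vector.
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=1024)
    return np.array(fp, dtype=np.int8)

X = np.vstack([featurize(s) for s in smiles])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

query = featurize("CCOC(=O)c1ccccc1").reshape(1, -1)
print(model.predict_proba(query)[0])  # predicted [P(inactive), P(active)]
```

In practice a model like this would be trained on thousands of assayed compounds; the structure is the point here, not the toy numbers.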
- Discussion
6
- 10.1016/j.ejmp.2021.05.008
- Mar 1, 2021
- Physica Medica
Focus issue: Artificial intelligence in medical physics.
- Research Article
9
- 10.1111/ajo.13661
- Apr 1, 2023
- Australian and New Zealand Journal of Obstetrics and Gynaecology
Artificial intelligence (AI) is the simulation of human intelligence in machines that are programmed to think and learn like humans. AI has the potential to revolutionise the way that healthcare professionals diagnose, treat, and manage conditions affecting the female reproductive system. Machine learning (ML) is a subset of AI which deals with the development of algorithms and statistical models that enable computers to learn from and make predictions or decisions without being explicitly programmed to do so. Deep learning (DL) is a subfield of ML that utilises neural networks with multiple layers, known as deep neural networks (DNNs), to learn from data. DNNs are inspired by the structure and function of the human brain and are capable of automatically learning high-level features from raw data, such as images, audio and text. DL has been very successful in various applications such as image and speech recognition, natural language processing and computer vision. ML algorithms can be divided into three categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning algorithms are trained on a labelled dataset, where the desired output (label) is already known. Unsupervised learning algorithms are trained on an unlabelled dataset and are used to discover patterns or relationships in the data. Reinforcement learning algorithms are trained using a trial-and-error approach, where the agent receives a reward or penalty for its actions. The goal of reinforcement learning is to learn a policy that maximises the expected reward over time. AI and ML are increasingly being applied in the field of obstetrics and gynaecology, with the potential to improve diagnostic accuracy, patient outcomes, and efficiency of care. AI has been applied to the field of medicine for several decades. One of the earliest examples of AI in medicine was the development of MYCIN in the 1970s, a computer program that could diagnose bacterial infections and recommend appropriate antibiotic treatments. MYCIN was developed by a team at Stanford University led by Edward Shortliffe, and its success demonstrated the potential of AI in medical decision making. In the 1980s, AI-based expert systems such as DXplain, developed at Massachusetts General Hospital, were used to assist in the diagnosis of diseases. These early AI systems were rule-based and limited in their capabilities. One of the earliest examples of AI in obstetrics and gynaecology was the development of computer-aided diagnostic systems for ultrasound images in the 1970s and 1980s. These systems were designed to assist radiologists in identifying fetal anomalies and other conditions. In recent years, there has been a renewed interest in the use of AI in obstetrics and gynaecology, driven by advances in ML and the availability of large amounts of data. One of the primary areas in which AI and ML are being used in obstetrics and gynaecology is in the analysis of imaging data, such as ultrasound and magnetic resonance imaging. AI algorithms can be trained to automatically identify and classify different structures in the images, such as the placenta or fetal organs, with high accuracy. Another area of focus is the use of AI to predict preterm birth. Researchers have used ML algorithms to analyse data from electronic health records and identify patterns that are associated with preterm birth.
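The three ML categories defined in the paragraph above can be contrasted with a toy scikit-learn sketch; the blob data and models are illustrative stand-ins unrelated to the clinical studies the review discusses.

```python
# Supervised vs. unsupervised learning on the same synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=300, centers=2, random_state=0)

# Supervised: labels y are given, the model learns the mapping X -> y.
clf = LogisticRegression().fit(X, y)

# Unsupervised: only X is given; cluster structure is discovered.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(clf.score(X, y), km.inertia_)
# Reinforcement learning differs in kind: an agent acts, observes a reward,
# and updates a policy, so it is built around an environment (e.g. Gymnasium)
# rather than a static dataset, and is omitted from this sketch.
```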
By analysing large datasets of patient information and outcomes, AI algorithms can identify patterns and risk factors that may not be apparent to human analysts. This can help to improve the prediction of obstetric outcomes and guide clinical decision making. In recent years, AI has also been applied in obstetrics and gynaecology for real-time monitoring of high-risk pregnancies and identifying fetal distress. These systems use ML algorithms to analyse data from fetal heart rate monitors and identify patterns that are associated with fetal distress. AI and ML are also being used to develop new tools for the management of gynaecological conditions, such as endometriosis and fibroids. These tools can be used to predict the progression of the disease and guide treatment decisions. One example of the use of AI in benign gynaecology is the development of computer-aided diagnostic systems for endometriosis. These systems use ML algorithms to analyse images of the pelvic region and identify the presence of endometrial tissue, which can be a sign of endometriosis. Another area where AI and ML are being applied is in the management of fibroids. ML algorithms are being used to analyse imaging data and predict the growth and behaviour of fibroids, which can aid in the development of personalised treatment plans. In the field of oncology, AI is being used to improve the accuracy and speed of cancer diagnosis. AI algorithms can analyse images of tissue samples to identify the presence of cancer cells and predict the likelihood of a positive outcome following treatment. AI algorithms can be trained to analyse images from pelvic scans and identify signs of ovarian cancer with high accuracy. In addition to these specific applications, AI and ML are also being used to improve the efficiency and organisation of care in obstetrics and gynaecology. For example, by analysing large amounts of clinical data, AI algorithms can be used to identify patients at high risk of complications, prioritise them for care and ensure that they receive the appropriate level of care in a timely manner. AI and ML have the potential to revolutionise the field of fertility and in vitro fertilisation (IVF). By using data from large patient populations, AI and ML algorithms can help identify patterns and predict outcomes that would be difficult for human experts to discern. This can lead to improvements in diagnosis, treatment planning, and overall success rates for patients undergoing IVF. One area where AI and ML are being applied is in the selection of embryos for transfer during IVF. By analysing images of embryos, AI and ML algorithms can predict which embryos are most likely to result in a successful pregnancy. Another area where AI and ML have shown potential is in the optimisation of culture conditions for embryos. This has the potential to improve the survival and development of embryos, leading to higher pregnancy rates. AI and ML are also being used to improve the timing of embryo transfer during IVF. By analysing data from patient medical histories, AI and ML algorithms can predict the optimal time for transfer to increase the chances of successful pregnancies. In addition to these applications, AI and ML are being used in other areas of fertility and IVF to improve patient outcomes. For example, AI and ML are being used to predict ovarian reserve, predict ovulation timing, and improve the efficiency and cost-effectiveness of fertility clinics.
AI and ML are rapidly evolving fields that have the potential to revolutionise the field of surgery. These technologies can be used to assist surgeons in a variety of ways, from pre-operative planning to real-time guidance during procedures. One of the key areas where AI and ML are being applied in surgery is in image analysis. For example, algorithms can be used to automatically segment and identify structures in medical images, such as tumours or blood vessels. This can help surgeons plan procedures more accurately and reduce the risk of complications. Another area where AI and ML are being used in surgery is in the development of robotic systems. These systems can be programmed to perform specific tasks, such as suturing or cutting tissue, with a high degree of precision and accuracy. In addition, robotic systems can be equipped with sensors that provide real-time feedback to the surgeon, which can help to improve the outcome of the procedure. These systems can be programmed with advanced algorithms that allow them to make precise incisions, control bleeding, and minimise tissue damage. AI and ML can also be used to improve the efficiency and safety of surgical procedures. For example, algorithms can be trained to analyse data from vital signs monitors, such as heart rate and blood pressure, and alert surgeons to potential complications in real-time. AI and ML are also being used to assist with post-operative care. For example, algorithms can be used to analyse patient data and predict which patients are at risk of complications, such as infection or bleeding, allowing surgeons to take preventative measures. Overall, AI and ML have the potential to significantly improve the field of surgery by increasing accuracy and precision, reducing the risk of complications, and improving patient outcomes. As the technology continues to advance, it is likely that we will see an increasing number of AI-assisted surgical systems and applications in clinical practice. In gynaecology specifically, there is a scarcity of data and a lack of diversity in the data. This can lead to AI models that are not generalisable to certain populations or that make incorrect predictions for certain groups of patients. Overall, AI has the potential to improve the diagnosis and management of obstetrics and gynaecology conditions, and many studies have shown that AI systems can perform at least as well as human experts in several areas. However, it is important to note that AI and ML are still in the early stages of development in obstetrics and gynaecology and more research is needed to fully understand their potential benefits and limitations. Some of the key challenges facing the field include developing AI systems that can explain their decisions, improving the robustness of AI systems to adversarial attacks, and developing AI systems that can operate in a wide range of environments. However, it is important to note that AI is a complementary tool to the obstetrics and gynaecology specialist and it is not meant to replace human expertise. The preceding text is entirely a product of an AI system. The preceding review, 'Artificial Intelligence in Gynaecology: An Overview', was composed and written by an evolutionary AI system, ChatGPT (Chat Generative Pre-trained Transformer). ChatGPT is an AI chatbot underpinned by the GPT architecture, an autoregressive language model that uses DL to produce human-like text. The system was trained on a dataset of over 500 GB of text data derived from books, articles, and websites prior to 2021.
The system can engage in responsive dialogue, generate computer code, and produce coherent and fluent text.1 ChatGPT was conceived by OpenAI, an AI laboratory based in San Francisco, California, founded by Elon Musk and Sam Altman in 2015. Since its public release on November 30, 2022, the potential for use and misuse has grown exponentially,2 ultimately leading to the prohibition of the utilisation of AI systems by multiple organisations, including schools and universities. Prompted by this interest in AI, the aim of this study was to assess the capacity of ChatGPT to generate a scientific review. In January 2023, a multidisciplinary study group was assembled to develop the study protocol, confirm the methodology and approve the topic. This research was exempt from ethics review under National Health and Medical Research Council guidelines.3 ChatGPT was instructed to generate a narrative review based on dialogue with the lead author, AY. The input was informed by collaborative meetings of the study group over the study period. The study group nominated the topic, 'Artificial Intelligence in Gynaecology', but ChatGPT generated the title, structure and content for this paper. The study group defined the input parameters for ChatGPT and each AI output was reviewed by the authors for consistency and context, informing the next input. The dialogue thus became increasingly specific and refined with each iteration, as the initial general outline was expanded to include specific subheadings, academic language and academic references. The review was finalised from the ChatGPT output through an explicit composition protocol, limiting assembly to cut and paste, deletion to whole sentences (but not words) and conversion to Australian English. No grammatical or syntax correction was performed. The AI output was cross-referenced and verified by the study group. In this study, ChatGPT generated 7112 words over more than 15 iterations, including 32 references. The output was restricted to the final review of 1809 words and nine unique references after removing duplicates4 and incorrect references (19). The final paper was submitted for blinded peer review. Thus, this study has demonstrated the capacity of an AI system, such as ChatGPT, to generate a scientific review through human academic instruction. AI is anticipated to expand the boundaries of evidence-based medicine through the potential of comprehensive analysis and summation of scientific publications. However, unlike systematic reviews or meta-analyses governed by explicit methodology, AI systems such as ChatGPT are the product of DL algorithms that depend on the quality of the input used to train the AI. Consequently, unlike systematic reviews, AI systems are bound by the bias, breadth, depth and quality of the training material. A dedicated medical AI would therefore be trained on an appropriate dataset, such as the National Library of Medicine Medline/PubMed database. However, the volume of data is challenging: as of 2022, there were over 33 million citations, equating to almost 200 GB for the minimum dataset. In contrast, ChatGPT has no external reference capabilities, such as access to the internet, search engines or any other sources of information outside of its own model.
If forced outside of this framework, ChatGPT may generate plausible-sounding but incorrect or nonsensical responses.4 Most notably, pushing the AI to include references leads the system to generate bizarre fabrications.5 Our paper demonstrated that only 28% (9/32) of the references were authentic, although this is better than the 11% reported in a recent paper.6 In contrast to human writing, AI-generated content is more likely to be of limited depth, to contain factual errors and fabricated references, and to repeat the instructions used to seed the output.7 The latter results in a formulaic language redundancy that all but identifies AI content. The human authors thus echo the conclusion of ChatGPT that AI is a complementary tool to the specialist and not meant to replace human expertise. For the moment. The authors report no conflicts of interest.
- Book Chapter
3
- 10.1007/978-3-031-26845-8_8
- Jan 1, 2023
Machine Learning is a sub-category of Artificial Intelligence that enables computers to recognize patterns, continuously learn from data, make predictions, and carry out decisions without being specifically programmed to do so. In this context, Machine Learning is a broad category of algorithms able to use datasets to identify patterns, discover insights, enhance understanding, and make decisions or predictions. Deep Learning is a particular branch of Machine Learning that makes use of Machine Learning functionality and moves beyond its capabilities. A Deep Learning algorithm is organized as a layered structure that tries to replicate the structure of the human brain. These capabilities enable the use of Machine Learning and Deep Learning algorithms in applications that identify and respond to cybercriminals' manifold cyberattacks. This is achieved by analyzing big datasets of cybersecurity incidents to identify patterns of malicious activity. For this purpose, Machine Learning and Deep Learning compare detected threat events with known attacks to identify similarities, which a trained Machine Learning or Deep Learning model then responds to automatically. Against this background, this chapter seeks to offer a clear explanation of the classification of Machine Learning and Deep Learning and to compare them with regard to effectiveness and efficiency in their specific application domains. This requires (i) discussing the methodological background of Machine Learning and Deep Learning; (ii) introducing relevant application areas of Machine Learning and Deep Learning, such as Intrusion Detection Systems; and (iii) presenting use cases showing how to combat cybersecurity risks based on threat event attacks. In this context, this chapter provides, in Sect. 8.1, a brief introduction to classical Machine Learning, which consists of Supervised, Unsupervised, and Reinforcement Machine Learning. In this regard, Sect. 8.1.1.1 introduces Supervised Machine Learning, while Sect. 8.1.1.2 refers to Unsupervised Machine Learning, and Sect. 8.1.1.3 focuses on Reinforcement Machine Learning. Sect. 8.1.1.4 finally compares the different Machine Learning methods with regard to their advantages and disadvantages. Based on this methodological introduction to classical Machine Learning, Sect. 8.2.1 introduces Machine Learning and cybersecurity issues. Machine Learning-based intrusion detection in industrial applications is the topic of Sect. 8.2.1.1. Section 8.2.1.2 introduces Machine Learning-based intrusion detection based on feature learning, and Machine Learning-based intrusion detection of unknown cyberattacks is the topic of Sect. 8.2.1.3. Section 8.3 gives the classification of Deep Learning methods, covering, in Sect. 8.3.1, the topics Feedforward Deep Neural Networks, Convolutional Feedforward Deep Neural Networks, Recurrent Deep Neural Networks, Deep Belief Networks, and Deep Bayesian Neural Networks. Based on this methodological background of Deep Learning methods, Sect. 8.3.2 introduces Deep Bayesian Neural Networks, while Sect. 8.3.3 refers to Deep Learning-based intrusion detection. Finally, Sect. 8.4 refers to Deep Learning methods in cybersecurity applications. Section 8.5 contains comprehensive questions on the topics of Machine Learning and Deep Learning, followed by "References" with references for further reading.
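The known-versus-detected comparison described above can be sketched as a simple signature-similarity check; the feature vectors, attack names, and threshold below are invented for illustration and are far simpler than any production intrusion detection system.

```python
# Toy sketch: match a detected event's feature vector against known attack
# signatures by cosine similarity. All vectors and names are hypothetical.
import numpy as np

known_attacks = {
    "port_scan": np.array([0.9, 0.1, 0.0, 0.8]),
    "brute_force": np.array([0.1, 0.9, 0.7, 0.0]),
}

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(event_vec, threshold=0.85):
    # Report the closest known attack, or "unknown" if nothing is close.
    scores = {name: cosine(event_vec, sig) for name, sig in known_attacks.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"

print(classify(np.array([0.85, 0.15, 0.05, 0.75])))  # -> "port_scan"
```

A trained ML or DL model replaces the hand-set threshold and fixed signatures with decision boundaries learned from incident data, which is what the chapter's use cases elaborate.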
- Research Article
29
- 10.1016/j.isci.2022.104814
- Jul 21, 2022
- iScience
Uncertainty quantification: Can we trust artificial intelligence in drug discovery?
- Dissertation
- 10.53846/goediss-6872
- Feb 21, 2022
Context- and Physiology-aware Machine Learning for Upper-Limb Myocontrol
- Research Article
24
- 10.1016/j.tics.2020.09.002
- Oct 8, 2020
- Trends in Cognitive Sciences
Artificial Intelligence and the Common Sense of Animals.
- Dissertation
- 10.25394/pgs.8085005.v1
- Aug 2, 2019
It is a central problem in both statistics and computer science to understand the theoretical foundation of machine learning, especially deep learning. During the past decade, deep learning has achieved remarkable successes in solving many complex artificial intelligence tasks. The aim of this dissertation is to understand deep neural networks (DNNs) and other nonparametric methods in machine learning. In particular, three machine learning models have been studied: weight-normalized DNNs, sparse DNNs, and the compositional nonparametric model. The first chapter presents a general framework for norm-based capacity control for $L_{p,q}$ weight-normalized DNNs. We establish the upper bound on the Rademacher complexities of this family. In particular, with an $L_{1,\infty}$ normalization, we discuss properties of a width-independent capacity control, which depends on the depth only through a square-root term. Furthermore, if the activation functions are anti-symmetric, the bound on the Rademacher complexity is independent of both the width and the depth up to a log factor. In addition, we study weight-normalized deep neural networks with rectified linear units (ReLU) in terms of functional characterization and approximation properties. In particular, for an $L_{1,\infty}$ weight-normalized network with ReLU, the approximation error can be controlled by the $L_1$ norm of the output layer. In the second chapter, we study $L_{1,\infty}$ weight normalization for deep neural networks with bias neurons to achieve a sparse architecture. We theoretically establish the generalization error bounds for both regression and classification under the $L_{1,\infty}$ weight normalization. It is shown that the upper bounds are independent of the network width and have $k^{1/2}$-dependence on the network depth $k$. These results provide theoretical justification for using such weight normalization to reduce the generalization error. We also develop an easily implemented gradient projection descent algorithm to practically obtain a sparse neural network. We perform various experiments to validate our theory and demonstrate the effectiveness of the resulting approach. In the third chapter, we propose a compositional nonparametric method in which a model is expressed as a labeled binary tree of $2k+1$ nodes, where each node is either a summation, a multiplication, or the application of one of the $q$ basis functions to one of the $m_1$ covariates. We show that in order to recover a labeled binary tree from a given dataset, a sufficient number of samples is $O(k\log(m_1 q)+\log(k!))$, and a necessary number of samples is $\Omega(k\log(m_1 q)-\log(k!))$. We further propose a greedy algorithm for regression to validate our theoretical findings through synthetic experiments.
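The gradient projection step mentioned in the second chapter can be sketched concretely: after each gradient update, project each row of a weight matrix back onto an $L_1$ ball so that the constraint $\max_i \sum_j |W_{ij}| \le c$ (the $L_{1,\infty}$ norm) holds. Since the $L_{1,\infty}$ ball is a product of per-row $L_1$ balls, the projection decomposes row by row. The NumPy sketch below uses the standard sorting-based $L_1$-ball projection and is an assumption-level illustration, not the dissertation's actual code.

```python
import numpy as np

def project_l1_ball(v, c):
    # Euclidean projection of vector v onto the L1 ball of radius c,
    # via the standard sorting-based algorithm.
    if np.abs(v).sum() <= c:
        return v
    u = np.sort(np.abs(v))[::-1]          # magnitudes, descending
    cumsum = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (cumsum - c))[0][-1]
    theta = (cumsum[rho] - c) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def project_l1_inf(W, c):
    # Enforce max_i sum_j |W[i, j]| <= c by projecting each row.
    return np.vstack([project_l1_ball(row, c) for row in W])

# One projected-gradient step on weights W with gradient G (toy values):
rng = np.random.default_rng(0)
W, G, lr, c = rng.normal(size=(4, 8)), rng.normal(size=(4, 8)), 0.1, 1.0
W = project_l1_inf(W - lr * G, c)
print(np.abs(W).sum(axis=1).max() <= c + 1e-9)  # constraint holds
```

The soft-thresholding in the projection is also what drives the sparsity the chapter aims for: rows shrink toward exact zeros.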
- Research Article
69
- 10.3390/jcm11195772
- Sep 29, 2022
- Journal of Clinical Medicine
Background: It is important to be able to predict, for each individual patient, the likelihood of later metastatic occurrence, because the prediction can guide treatment plans tailored to a specific patient to prevent metastasis and to help avoid under-treatment or over-treatment. Deep neural network (DNN) learning, commonly referred to as deep learning, has become popular due to its success in image detection and prediction, but questions such as whether deep learning outperforms other machine learning methods when using non-image clinical data remain unanswered. Grid search has been introduced to deep learning hyperparameter tuning for the purpose of improving its prediction performance, but the effect of grid search on other machine learning methods is under-studied. In this research, we take the empirical approach to study the performance of deep learning and other machine learning methods when using non-image clinical data to predict the occurrence of breast cancer metastasis (BCM) 5, 10, or 15 years after the initial treatment. We developed prediction models using the deep feedforward neural network (DFNN) method, as well as models using nine other machine learning methods, including naïve Bayes (NB), logistic regression (LR), support vector machine (SVM), LASSO, decision tree (DT), k-nearest neighbor (KNN), random forest (RF), AdaBoost (ADB), and XGBoost (XGB). We used grid search to tune hyperparameters for all methods. We then compared our feedforward deep learning models to the models trained using the nine other machine learning methods. Results: Based on the mean test AUC (Area under the ROC Curve) results, DFNN ranks 6th, 4th, and 3rd when predicting 5-year, 10-year, and 15-year BCM, respectively, out of 10 methods. The top performing methods in predicting 5-year BCM are XGB (1st), RF (2nd), and KNN (3rd). For predicting 10-year BCM, the top performers are XGB (1st), RF (2nd), and NB (3rd). Finally, for 15-year BCM, the top performers are SVM (1st), LR and LASSO (tied for 2nd), and DFNN (3rd). The ensemble methods RF and XGB outperform other methods when data are less balanced, while SVM, LR, LASSO, and DFNN outperform other methods when data are more balanced. Our statistical testing results show that at a significance level of 0.05, DFNN overall performs comparably to other machine learning methods when predicting 5-year, 10-year, and 15-year BCM. Conclusions: Our results show that deep learning with grid search overall performs at least as well as other machine learning methods when using non-image clinical data. It is interesting to note that some of the other machine learning methods, such as XGB, RF, and SVM, are very strong competitors of DFNN when incorporating grid search. It is also worth noting that the computation time required to do grid search with DFNN is much greater than that required to do grid search with the other nine machine learning methods.
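The grid-search protocol the authors describe can be illustrated with a small scikit-learn sketch; the synthetic imbalanced data, candidate models, and hyperparameter grids below are stand-ins chosen for brevity, not the study's actual configuration.

```python
# Hedged sketch: AUC-scored grid search over a feedforward net (as a stand-in
# for a DFNN) and a random forest, on synthetic non-image tabular data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

candidates = {
    "DFNN": (MLPClassifier(max_iter=500, random_state=0),
             {"hidden_layer_sizes": [(64,), (64, 32)], "alpha": [1e-4, 1e-2]}),
    "RF": (RandomForestClassifier(random_state=0),
           {"n_estimators": [200, 500], "max_depth": [None, 10]}),
}

for name, (est, grid) in candidates.items():
    search = GridSearchCV(est, grid, scoring="roc_auc", cv=5)
    search.fit(X_tr, y_tr)
    # Held-out test AUC for the best hyperparameters found by the grid.
    print(name, search.best_params_, round(search.score(X_te, y_te), 3))
```

The cost asymmetry the authors note is visible even at this scale: each DFNN grid cell requires full network training, so its search dominates the run time.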
- Research Article
1
- 10.1049/el.2019.2376
- Sep 1, 2019
- Electronics Letters
GenSyth: a new way to understand deep learning
- Book Chapter
1
- 10.1016/b978-0-323-89925-3.00002-2
- Jan 1, 2023
- A Handbook of Artificial Intelligence in Drug Delivery
Chapter 2 - General considerations on artificial intelligence
- Book Chapter
- 10.1007/978-3-030-90708-2_6
- Jan 1, 2022
Machine learning (ML) and artificial intelligence (AI) methods are some of the latest advancements in the field of computing. Among these methods are nature-inspired techniques such as deep learning and deep neural networks, which take their inspiration from the neural networks of the human brain. These methods are applicable to securing networks and network-connected machines against malware, intrusion, and other cyberattacks. For example, in dealing with modern cyber threats, some standard ML and AI methods that can be useful are malicious code recognition for malware analysis, object-based modeling to classify security threats, and heuristic rule systems for intrusion detection. In this way, ML and AI can play a key role in cyber threat detection and prevention. Due to the large amounts of data packets passing through a network, processing and parsing that data to find malware, intrusion, or other malicious code and files can be overwhelmingly difficult for humans. Machine learning models can be trained to detect malicious patterns in data or files and can thus be used to automatically detect malware or intrusive activity. Additionally, humans are limited in the amount of time they can spend, but once programmed, a machine learning model can continue running and operating nonstop to detect and prevent malicious code and files from entering a network-connected system. This can reduce human effort and minimize human error by automating the computing required to detect and thwart cyberattacks. This paper surveys and reviews different AI and ML methods that have been used in past literature for cybersecurity applications. The goal of this work is to aid cybersecurity researchers and professionals in employing AI and ML techniques for cybersecurity applications, such as malicious code detection, intrusion detection, and cyber threat analysis.
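A minimal supervised intrusion-detection sketch in scikit-learn, in the spirit of the survey above; the flow features, distributions, and labels are synthetic stand-ins for a real benchmark such as NSL-KDD.

```python
# Toy flow-level classifier: benign vs. attack traffic on invented features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Feature columns: duration, bytes_sent, bytes_recv, failed_logins.
benign = rng.normal([1.0, 500, 800, 0.0], [0.5, 200, 300, 0.1], size=(500, 4))
attack = rng.normal([0.1, 50, 20, 5.0], [0.05, 30, 10, 1.0], size=(500, 4))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```

Once trained, such a model can score flows continuously, which is the "nonstop operation" advantage over human analysts that the survey emphasizes.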
- Research Article
24
- 10.1007/978-1-0716-0826-5_7
- Aug 18, 2020
- Methods in molecular biology (Clifton, N.J.)
While the term artificial intelligence and the concept of deep learning are not new, recent advances in high-performance computing, the availability of large annotated data sets required for training, and novel frameworks for implementing deep neural networks have led to an unprecedented acceleration of the field of molecular (network) biology and pharmacogenomics. The need to align biological data with innovative machine learning has stimulated developments in both data integration (fusion) and knowledge representation, in the form of heterogeneous, multiplex, and biological networks or graphs. In this chapter we briefly introduce several popular neural network architectures used in deep learning, namely, the fully connected deep neural network, recurrent neural network, convolutional neural network, and the autoencoder. Deep learning predictors, classifiers, and generators utilized in modern feature extraction may well assist interpretability and thus make AI tools more explainable, potentially adding insights and driving advances in novel chemistry and biology discovery. The capability of learning representations from structures directly, without using any predefined structure descriptor, is an important feature distinguishing deep learning from other machine learning methods and makes traditional feature selection and reduction procedures unnecessary. In this chapter we briefly show how these technologies are applied for data integration (fusion) and analysis in drug discovery research, covering these areas: (1) application of convolutional neural networks to predict ligand-protein interactions; (2) application of deep learning in compound property and activity prediction; (3) de novo design through deep learning. We also: (1) discuss some aspects of future development of deep learning in drug discovery/chemistry; (2) provide references to published information; (3) provide recently advocated recommendations on using artificial intelligence and deep learning in -omics research and drug discovery.
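Of the architectures the chapter lists, the autoencoder is the easiest to sketch compactly. Below is a minimal PyTorch version, assuming an arbitrary 1024-dimensional input (e.g. a molecular fingerprint) compressed to a 32-dimensional latent code; the layer sizes and data are illustrative, not taken from the chapter.

```python
# Hedged sketch: a fully connected autoencoder learning a compressed
# representation by reconstruction loss. Dimensions are arbitrary choices.
import torch
from torch import nn

class AutoEncoder(nn.Module):
    def __init__(self, d_in=1024, d_latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(),
                                     nn.Linear(256, d_latent))
        self.decoder = nn.Sequential(nn.Linear(d_latent, 256), nn.ReLU(),
                                     nn.Linear(256, d_in), nn.Sigmoid())

    def forward(self, x):
        # Compress to the latent code, then reconstruct the input.
        return self.decoder(self.encoder(x))

model = AutoEncoder()
x = torch.rand(8, 1024)                      # batch of 8 synthetic inputs
loss = nn.functional.binary_cross_entropy(model(x), x)
loss.backward()                              # one reconstruction-loss step
print(float(loss))
```

The learned latent code is the "representation learned directly from structure" the chapter highlights: no hand-crafted descriptor is supplied, so downstream feature selection becomes unnecessary.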
- Research Article
16
- 10.1097/corr.0000000000001679
- Feb 17, 2021
- Clinical orthopaedics and related research
CORR Synthesis: When Should the Orthopaedic Surgeon Use Artificial Intelligence, Machine Learning, and Deep Learning?