Disease Diagnosis Systems Using Machine Learning and Deep Learning Techniques Based on TensorFlow Toolkit: A Review
Machine learning and deep learning algorithms have become increasingly important in the medical field, especially for diagnosing disease using medical databases. Techniques developed within these two fields are now used to classify different diseases. The number of machine learning algorithms is vast and increasing, and the number of frameworks and libraries that implement them is likewise vast and growing. TensorFlow is a well-known machine learning library that has been used by several researchers in the field of disease classification. With the help of TensorFlow, Google's framework, a complex calculation can be addressed effectively by modeling it as a graph and mapping the graph segments onto the machines of a cluster. In this review paper, the role of the TensorFlow Python framework in disease classification is discussed.
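The graph abstraction the abstract credits to TensorFlow can be illustrated with a minimal sketch in plain Python (an illustration of the idea only, not TensorFlow's actual API): a computation such as (2 + 3) × 4 is first built as a graph of operation nodes and only evaluated afterwards, which is what lets a framework map graph segments onto different machines in a cluster.

```python
class Node:
    """One operation in a dataflow graph; evaluation is deferred."""

    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def eval(self):
        # Constants return their stored value; other ops recursively
        # evaluate their inputs first (a post-order graph traversal).
        if self.op == "const":
            return self.value
        args = [node.eval() for node in self.inputs]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError(f"unknown op: {self.op}")


# Build the graph for (2 + 3) * 4; nothing is computed until eval().
a = Node("const", value=2)
b = Node("const", value=3)
c = Node("const", value=4)
graph = Node("mul", inputs=(Node("add", inputs=(a, b)), c))
print(graph.eval())  # 20
```

Because the whole computation exists as a data structure before it runs, independent subgraphs (here, the `add` node and the constant `c`) could in principle be evaluated on different workers.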
- Research Article
- 10.1080/23279095.2024.2382823
- Jul 31, 2024
- Applied Neuropsychology: Adult
The cognitive impairment known as dementia affects millions of individuals throughout the globe. The use of machine learning (ML) and deep learning (DL) algorithms has shown great promise as a means of early identification and treatment of dementia. Dementias such as Alzheimer's dementia, frontotemporal dementia, Lewy body dementia, and vascular dementia are all discussed in this article, along with a literature review on using ML algorithms in their diagnosis. Different ML algorithms, such as support vector machines, artificial neural networks, decision trees, and random forests, are compared and contrasted, along with their benefits and drawbacks. As discussed in this article, accurate ML models may be achieved by carefully considering feature selection and data preparation. We also discuss how ML algorithms can predict disease progression and patient responses to therapy. However, overreliance on ML and DL technologies should be avoided without further proof: these technologies are meant to assist in diagnosis and should not be used as the sole criterion for a final diagnosis. The research implies that ML algorithms may help increase the precision with which dementia is diagnosed, especially in its early stages. The efficacy of ML and DL algorithms in clinical contexts must still be verified, and ethical issues around the use of personal data must be addressed; both require further study.
- Research Article
- 10.15678/znuek.2018.0978.0603
- Jan 1, 2018
- Zeszyty Naukowe Uniwersytetu Ekonomicznego w Krakowie
Insolvency prediction is one of the crucial abilities in corporate finance and financial management. It is critical in accounts receivable management, capital budgeting decisions, financial analysis, capital structure management, going-concern assessment, and co-operation with other companies. The purpose of this paper is to compare the efficiency of selected deep learning and machine learning algorithms trained on a representative sample of Polish companies for the period 2008–2017. In particular, the paper tested the following popular machine learning algorithms: discriminant analysis (DA), logit (L), support vector machines (SVM), random forest (RF), gradient boosting decision trees (GB), neural network with one hidden layer (NN), convolutional neural network (CNN), and naïve Bayes (NB). The research hypotheses evaluated in the paper state that, given access to a large sample of companies, the most accurate algorithms (first choice) in bankruptcy prediction will be gradient boosting decision trees (H1), random forest (H2), and deep-learning neural networks (H3). The initial hypotheses were formulated based on practitioners' opinions regarding the usefulness of various machine learning and artificial intelligence algorithms in bankruptcy prediction. As the results suggest, deep learning and machine learning algorithms proved to have very comparable efficiency. A new element introduced in the paper is that the models were trained on a representative sample of companies (for the years 2008–2013) while the testing phase used a significant number of bankrupt and active companies from a completely different set: validation data were taken from a different time period, 2014–2017, and the companies in the two sets did not overlap.
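The paper's out-of-time validation design (train on 2008–2013 firms, test on a disjoint set of 2014–2017 firms) can be sketched as follows, using hypothetical toy records rather than the study's actual data:

```python
# Hypothetical toy records standing in for the study's firm-year data.
records = [
    {"firm": "A", "year": 2009, "bankrupt": 0},
    {"firm": "B", "year": 2012, "bankrupt": 1},
    {"firm": "C", "year": 2015, "bankrupt": 0},
    {"firm": "D", "year": 2016, "bankrupt": 1},
]

# Training uses only the 2008-2013 period; testing only 2014-2017.
train = [r for r in records if 2008 <= r["year"] <= 2013]
test = [r for r in records if 2014 <= r["year"] <= 2017]

# Guard matching the paper's design: no firm appears in both phases.
shared_firms = {r["firm"] for r in train} & {r["firm"] for r in test}
assert not shared_firms, "train and test must contain different companies"
print(len(train), len(test))  # 2 2
```

Splitting by both period and firm rules out leakage of firm-specific information from the training phase into validation.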
- Book Chapter
- 10.1007/978-981-19-5443-6_45
- Jan 1, 2023
The aim of this paper is to compare and contrast deep learning (DL) and various machine learning (ML) algorithms for fungi classification. The Danish Fungi data set, provided by Kaggle, was used for this study. Only 10 classes, comprising 1,775 images, were extracted from the data set. The machine learning techniques used in this work are decision tree (DT), naïve Bayes (NB), k-nearest neighbour (KNN), and random forest (RF), which achieved accuracies of 25%, 28%, 29%, and 33%, respectively. These accuracies are low because such algorithms are usually applied to numerical feature data and are not well suited to raw images. A deep learning model built with Keras achieved an accuracy of 75.82%. Comparing quantitative metrics such as precision, recall, and F1-score, it can be concluded that the deep learning approach performs much better than the machine learning algorithms on this task.
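The point about classical ML treating images as plain numeric vectors can be illustrated with a toy k-nearest-neighbour classifier in plain Python (an illustration only, not the chapter's actual pipeline): each tiny 2×2 "image" is flattened into a feature vector and classified by pixel-wise distance, so spatial structure is ignored entirely.

```python
def knn_predict(train, labels, x, k=3):
    # Squared Euclidean distance between flattened pixel vectors:
    # every pixel is treated as an independent numeric feature.
    dists = sorted(
        (sum((p - q) ** 2 for p, q in zip(vec, x)), lab)
        for vec, lab in zip(train, labels)
    )
    votes = [lab for _, lab in dists[:k]]
    return max(set(votes), key=votes.count)  # majority vote


# Four 2x2 "images" flattened to length-4 vectors: two with a bright
# top row, two with a bright bottom row.
train = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]]
labels = ["top", "top", "bottom", "bottom"]
print(knn_predict(train, labels, [1, 1, 0, 1], k=3))  # top
```

A shifted or rotated version of the same pattern lands far away in raw pixel space, which is exactly why convolutional networks, with their learned spatial features, fare better on images.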
- Research Article
- 10.1097/corr.0000000000001679
- Feb 17, 2021
- Clinical orthopaedics and related research
CORR Synthesis: When Should the Orthopaedic Surgeon Use Artificial Intelligence, Machine Learning, and Deep Learning?
- Conference Article
- 10.1109/icscds53736.2022.9760818
- Apr 7, 2022
Wind energy, a notable and viable source, has the potential to deliver power in a constant and sustainable manner. However, wind energy presents numerous challenges, such as idle wind-plant assets, high upfront investment costs, and the difficulty of identifying sites with good wind efficiency. The major objective of this work is to determine the power efficiency of wind turbines, which also supports a proposal to reduce wind-turbine maintenance costs. In this research, data analysis of turbine generators is performed on day-to-day wind-speed data using machine learning and deep learning algorithms. An approach is put forward in which deep learning and machine learning algorithms reliably predict different power values, and the performance of these algorithms is then analyzed. For longer-term forecasting, the algorithms can estimate the wind-generation rate from historical wind-speed data. Moreover, the deep and machine learning models are applied to locations distinct from those on which they were trained; the analysis demonstrates that, for wind plants in unspecified geographies, these algorithms can be successfully applied by reusing the model trained at the base location. The project as a whole focuses on wind-turbine generators and uses data visualization and data analytics to identify the factors that influence wind-power generation. Using previous output data, wind power is predicted with both machine learning and deep learning models, with different datasets used for training and testing, which adds to the uniqueness of this work.
- Research Article
- 10.55041/ijsrem27894
- Jan 4, 2024
- INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
Wind energy, a notable and viable source, has the potential to deliver power in a constant and sustainable manner. However, wind energy presents numerous challenges, such as idle wind-plant assets, high upfront investment costs, and the difficulty of identifying sites with good wind efficiency. The major objective of this work is to determine the power efficiency of wind turbines, which will also aid in formulating a proposal to reduce wind-turbine maintenance costs. In this research, data analysis of turbine generators is performed on day-to-day wind-speed data using machine learning and deep learning algorithms. We put forward an approach in which deep learning and machine learning algorithms reliably predict different power values, and the performance of these algorithms is analyzed. For longer-term forecasting, the algorithms can estimate the wind-generation rate from historical wind-speed data. Index Terms: wind turbine, machine learning algorithm
- Research Article
- 10.1007/s10661-024-12454-z
- Feb 24, 2024
- Environmental Monitoring and Assessment
Digital image processing has witnessed a significant transformation, owing to the adoption of deep learning (DL) algorithms, which have proven to be vastly superior to conventional methods for crop detection. These DL algorithms have recently found successful applications across various domains, translating input data, such as images of afflicted plants, into valuable insights, like the identification of specific crop diseases. This innovation has spurred the development of cutting-edge techniques for early detection and diagnosis of crop diseases, leveraging tools such as convolutional neural networks (CNN), k-nearest neighbour (KNN), support vector machines (SVM), and artificial neural networks (ANN). This paper offers an all-encompassing exploration of the contemporary literature on methods for diagnosing, categorizing, and gauging the severity of crop diseases. The review examines the performance analysis of the latest machine learning (ML) and DL techniques outlined in these studies. It also scrutinizes the methodologies and datasets and outlines the prevalent recommendations and identified gaps within different research investigations. In conclusion, the review offers insights into potential solutions and outlines the direction for future research in this field. The review underscores that while most studies have concentrated on traditional ML algorithms and CNN, there has been a noticeable dearth of focus on emerging DL algorithms like capsule neural networks and vision transformers. Furthermore, it sheds light on the fact that several datasets employed for training and evaluating DL models have been tailored to suit specific crop types, emphasizing the pressing need for a comprehensive and expansive image dataset encompassing a wider array of crop varieties. Moreover, the survey draws attention to the prevailing trend where the majority of research endeavours have concentrated on individual plant diseases, ML, or DL algorithms.
In light of this, it advocates for the development of a unified framework that harnesses an ensemble of ML and DL algorithms to address the complexities of multiple plant diseases effectively.
- Book Chapter
- 10.1007/978-981-19-2821-5_59
- Sep 27, 2022
The main objective of this research is to analyze and compare the performance of machine learning (ML) and deep learning (DL) algorithms in detecting online hate speech. To this end, Support Vector Machine (SVM), Random Forest (RF), Decision Tree (DT), Logistic Regression (LR), Convolutional Neural Network (CNN), Recurrent Neural Network with Long Short-Term Memory (RNN_LSTM), BERT (Bidirectional Encoder Representations from Transformers), and DistilBERT algorithms have been explored and analyzed. The research uses a hate-speech dataset developed by Andry Samoshyn and publicly available on Kaggle. Both ML and DL algorithms achieved good accuracy scores: among the ML algorithms, SVM, RF, and LR obtained the highest accuracy, while among the DL algorithms, RNN_LSTM, DistilBERT, and BERT performed best. Based on F-measure, the DL classifiers outperformed the ML algorithms, with DistilBERT obtaining the highest F-measure scores. Comparing overall performance, DL outperforms ML in detecting hate speech, and transformer-based DL models in particular are more efficient than the other DL and ML algorithms. Keywords: hate speech, machine learning, deep learning, Twitter, performance comparison
- Research Article
- 10.1007/s00198-025-07541-x
- Jun 10, 2025
- Osteoporosis international : a journal established as result of cooperation between the European Foundation for Osteoporosis and the National Osteoporosis Foundation of the USA
Machine learning drives osteoporosis detection and screening with higher clinical accuracy and accessibility than traditional osteoporosis screening tools. This review takes a step-by-step view of machine learning for osteoporosis detection, providing insights into today's osteoporosis detection and the outlook for the future. The early diagnosis and risk detection of osteoporosis have always been crucial and challenging issues in the medical field. With the in-depth application of artificial intelligence technology, especially machine learning technology in the medical field, significant breakthroughs have been made in the application of early diagnosis and risk detection of osteoporosis. Machine learning is a multidimensional technical system that encompasses a wide variety of algorithm types. Machine learning algorithms have become relatively mature and developed over many years in medical data processing. They possess stable and accurate detection performance, laying a solid foundation for the detection and diagnosis of osteoporosis. As an essential part of the machine learning technical system, deep-learning algorithms are complex algorithm models based on artificial neural networks. Due to their robust image recognition and feature extraction capabilities, deep learning algorithms have become increasingly mature in the early diagnosis and risk assessment of osteoporosis in recent years, opening new ideas and approaches for the early and accurate diagnosis and risk detection of osteoporosis. This paper reviewed the latest research over the past decade, ranging from relatively basic and widely adopted machine learning algorithms combined with clinical data to more advanced deep learning techniques integrated with imaging data such as X-ray, CT, and MRI. 
By analyzing the application of algorithms at different stages, we found that these basic machine learning algorithms performed well when dealing with single structured data but encountered limitations when handling high-dimensional and unstructured imaging data. Deep learning, on the other hand, can significantly improve detection accuracy by automatically extracting image features, especially in image histological analysis. However, it faces challenges, including the "black-box" problem, heavy reliance on large amounts of labeled data, and difficulties in clinical interpretability; these issues highlight the importance of model interpretability in future machine learning research. Finally, we expect the future development of a predictive model that combines multimodal data (such as clinical indicators, blood biochemical indicators, imaging data, and genetic data) integrated with electronic health records and machine learning techniques. This model aims to provide a skeletal health monitoring system that is highly accessible, personalized, convenient, and efficient, furthering the early detection and prevention of osteoporosis.
- Research Article
- 10.4018/ijban.298014
- Apr 6, 2022
- International Journal of Business Analytics
To date, various methods such as data mining, machine learning, and artificial intelligence have been used to extract the most value from large and important data resources. Deep learning, one of these methods, is an extended version of artificial neural networks. Within the scope of this study, a model has been developed to classify the success of telemarketing with different machine learning algorithms, especially the deep learning algorithm. Naïve Bayes, C5.0, extreme learning machine, and deep learning algorithms have been used for modelling. To examine the effect of class-label distribution on model success, the Synthetic Minority Oversampling Technique (SMOTE) was used. The results reveal the success of the deep learning and decision tree algorithms: when the data set was not balanced, the deep learning algorithm performed better in terms of sensitivity, while among all models the best performance in terms of accuracy, precision, and F-score was achieved with the C5.0 algorithm.
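The Synthetic Minority Oversampling Technique mentioned above generates synthetic minority-class samples by interpolating between a real minority sample and one of its nearest minority-class neighbours. A minimal plain-Python sketch of that idea follows (illustrative only; real work would use a library implementation such as imbalanced-learn):

```python
import random


def smote(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic samples from minority-class vectors."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x within the minority class only.
        neighbours = sorted(
            (m for m in minority if m is not x),
            key=lambda m: sum((a - b) ** 2 for a, b in zip(x, m)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        # New point lies on the segment between x and its neighbour.
        synthetic.append([a + gap * (b - a) for a, b in zip(x, nb)])
    return synthetic


minority = [[1.0, 2.0], [1.2, 1.9], [0.9, 2.2]]
print(len(smote(minority, n_new=4)))  # 4
```

Because each synthetic point is an interpolation, it stays within the region already occupied by the minority class rather than duplicating existing samples outright.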
- Book Chapter
- 10.1049/pbse016e_ch4
- Aug 24, 2022
Owing to recent developments in technology, major changes have taken place in human life, which has become more convenient in terms of living standards. In current real-world applications, attention has shifted from wired devices to wireless devices. As a result, we have moved into the era of smart technology, where a large number of Internet devices are connected in a distributed and decentralized manner. Such Internet-connected devices (ICDs), or the Internet of Things (IoT), generate tremendous amounts of data by communicating with other smart devices. With this tremendous increase in data, there is a growing need to process the huge volumes generated by billions of ICDs using efficient machine learning (ML) algorithms. In the past decade, data mining algorithms were used to derive decisions from collected datasets, but data mining fails to handle data at today's scale; to refine this information efficiently, more capable analytics algorithms, i.e., ML algorithms, are required instead. The ML algorithms available for such analysis fall into supervised learning (used with labeled data), unsupervised learning (used with unlabeled data), and semi-supervised learning (used with partially labeled data), alongside reward-based reinforcement learning. Supervised learning algorithms include linear regression, classification methods, and k-nearest neighbour (KNN); unsupervised algorithms include clustering methods such as k-means. In general, ML focuses on building systems that learn, and hence improve, with knowledge and experience. As the heart of artificial intelligence (AI) and data science, ML is gaining popularity day by day.
Several algorithms have been developed over the past decade for processing data, and the field continues to focus on developing new learning algorithms that scale to big data with minimal time and space complexity. ML algorithms apply not only in computer science but also in medicine, psychology, marketing, manufacturing, the automobile industry, and beyond. Big data and deep learning, meanwhile, are two primary and highly in-demand fields of data science; deep learning, a subset of ML used in computer vision and AI, is employed here. The massive amounts of domain-specific data that form big data (characterized by the five V's: velocity, volume, value, variety, and veracity) contain valuable information for fields such as marketing, automobiles, finance, cyber security, medicine, and fraud detection. These real-world applications create a great deal of information every day, and the valuable (i.e., meaningful) information must be retrieved from this large, unstructured data for further processing and for prediction. Large organizations must contend with tremendous volumes of data for prediction, classification, and decision making. ML algorithms for big data analytics extract high-level semantics from this information, using a hierarchical process to efficiently retrieve complex abstractions from the data. Hence, this chapter discusses several ML algorithms, including deep learning algorithms, for analyzing big data to enable efficient prediction.
The chapter then turns to the benefits of ML and deep learning algorithms in analyzing tremendous volumes of (often unstructured) data for numerous complex problems, such as information retrieval, medical diagnosis, cognitive science, indexing using semantic analysis, data tagging, speech recognition, and natural language processing. The weaknesses, open issues, and challenges that arise when analyzing big data with ML or deep learning are also discussed in detail; in other words, research gaps in applying ML and deep learning algorithms to big data are covered, along with future research trends. Finally, the chapter discusses the significance of the smart era, computational intelligence, and AI in depth.
- Research Article
- 10.21271/zjpas.34.2.3
- Apr 12, 2022
- ZANCO JOURNAL OF PURE AND APPLIED SCIENCES
Comprehensive Study for Breast Cancer Using Deep Learning and Traditional Machine Learning
- Research Article
- 10.1016/j.compeleceng.2023.108691
- Mar 22, 2023
- Computers and Electrical Engineering
Nine novel ensemble models for solar radiation forecasting in Indian cities based on VMD and DWT integration with the machine and deep learning algorithms
- Research Article
- 10.1093/eurheartj/ehab724.3069
- Oct 12, 2021
- European Heart Journal
ACS mortality prediction in Asian in-hospital patients with deep learning using machine learning feature selection
- Book Chapter
- 10.1007/978-3-030-74761-9_21
- Jul 28, 2021
This book chapter focuses on the development of an efficient AI-based medical imaging solution for COVID-19 that leverages easily available COVID chest X-ray (CXR) images. To this end, experiments with different deep learning and machine learning algorithms are performed. A convolutional neural network (CNN) is one of the most widely used deep learning algorithms in medical imaging systems, and different variants of CNN are used in this chapter. Data augmentation and dropout techniques are used to avoid overfitting. Among the variants, a CNN with ten convolutional layers trained for 15 epochs gave the best performance, with 88.23% training and 85.94% validation accuracy. This is followed by the use of AlexNet for feature extraction from CXR images; the extracted features are given as input to different machine learning classifiers, including Gaussian naïve Bayes, k-nearest neighbour (KNN), support vector machine (SVM), decision tree (DT), random forest (RF), gradient boosting, and AdaBoost, to classify the input images among the classes COVID-19, No Finding, and Pneumonia. Among these machine learning classifiers, SVM gave the best performance, with 86% testing accuracy. Thus, the deep learning algorithms have proven to give satisfactory performance, which can still be improved with more images and by fine-tuning the pretrained models. In addition, the chapter highlights the importance of transfer learning and gives a brief description of medical imaging.
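The two-stage pattern the chapter describes, a fixed deep network extracting features and a separate classical classifier trained on them, can be sketched in plain Python. Everything below is a hypothetical stand-in: a summary-statistics function replaces AlexNet, and a simple nearest-centroid classifier replaces the SVM.

```python
def extract_features(image):
    # Hypothetical stand-in for AlexNet: summarize each image by its
    # mean brightness and brightness range.
    flat = [p for row in image for p in row]
    return [sum(flat) / len(flat), max(flat) - min(flat)]


def fit_centroids(features, labels):
    # Nearest-centroid classifier standing in for the chapter's SVM:
    # store the mean feature vector of each class.
    centroids = {}
    for lab in set(labels):
        pts = [f for f, l in zip(features, labels) if l == lab]
        centroids[lab] = [sum(col) / len(col) for col in zip(*pts)]
    return centroids


def predict(centroids, feat):
    # Assign the class whose centroid is closest in feature space.
    return min(
        centroids,
        key=lambda lab: sum((a - b) ** 2 for a, b in zip(centroids[lab], feat)),
    )


# Two tiny 2x2 toy "images": one dark, one bright (labels are toy ones).
images = [[[0, 0], [0, 1]], [[9, 8], [9, 9]]]
labels = ["No Finding", "Pneumonia"]
model = fit_centroids([extract_features(im) for im in images], labels)
print(predict(model, extract_features([[8, 9], [9, 9]])))  # Pneumonia
```

The value of the pattern is the decoupling: the feature extractor can be swapped or fine-tuned independently of the downstream classifier, which is exactly how the chapter compares seven classical classifiers on the same AlexNet features.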