Machine Learning and Soil Humidity Sensing: Signal Strength Approach
The Internet-of-Things vision of ubiquitous and pervasive computing gives rise to future smart irrigation systems bridging the physical and digital worlds. A smart irrigation ecosystem combined with Machine Learning can solve the soil humidity sensing task and thereby ensure optimal water usage. Existing solutions rely on data received from power-hungry, expensive sensors that transmit the sensed data over a wireless channel. Over time, such systems become difficult to maintain, especially in remote areas, because batteries must be replaced across a large number of devices. A novel solution must therefore provide an alternative, cost- and energy-effective device with a unique advantage over existing approaches. This work explores the concept of a novel, low-power, LoRa-based, cost-effective system that uses Deep Learning to sense soil humidity with high accuracy simply by measuring the signal strength of a given underground beacon device.
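As a rough illustration of the idea, the sketch below regresses soil moisture from received signal strength. It is a minimal stand-in, assuming a single RSSI feature, synthetic data, and a small fully connected network; none of these reflect the paper's actual dataset or architecture.

```python
# Hypothetical sketch: regress soil moisture from LoRa RSSI readings.
# Feature ranges, noise level, and network size are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic stand-in data: wetter soil attenuates the buried beacon's
# signal, so RSSI (dBm) drops as volumetric moisture (%) rises.
moisture = rng.uniform(5, 45, size=2000)              # ground-truth labels
rssi = -60 - 0.8 * moisture + rng.normal(0, 2, 2000)  # noisy measurements

X_train, X_test, y_train, y_test = train_test_split(
    rssi.reshape(-1, 1), moisture, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X_train, y_train)
print("MAE (%% moisture): %.2f" % mean_absolute_error(y_test, model.predict(X_test)))
```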
- Research Article
- 10.9734/jsrr/2026/v32i24008
- Feb 21, 2026
- Journal of Scientific Research and Reports
Traditional irrigation practices in banana cultivation often result in inefficient water use, reduced productivity and higher environmental costs. With increasing challenges posed by climate variability and resource scarcity, smart irrigation technologies have emerged as a sustainable solution. This review article synthesises current research on the integration of Internet of Things (IoT), Machine Learning (ML) and Deep Learning (DL) in banana irrigation management. IoT sensors, including soil moisture, temperature, humidity and flow meters, enable real-time data collection and precise water delivery systems. ML algorithms such as regression models, random forests and Support Vector Machines (SVMs) support predictive irrigation scheduling and water requirement forecasting. DL techniques, particularly Convolutional Neural Networks (CNNs), are increasingly applied in image-based monitoring for detecting water stress and disease, integrated with drones and satellite imagery. IoT–ML–DL frameworks, supported by cloud computing and mobile applications, create automated, data-driven irrigation architectures. Reported benefits include improved water-use efficiency, enhanced banana yield and fruit quality, reduced input costs, environmental conservation and labour savings. Nonetheless, challenges remain in cost, scalability, connectivity in remote regions, data quality and farmer training. Future research and development should focus on affordable sensor networks, AI-driven predictive tools and capacity building to ensure widespread adoption.
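A minimal sketch of the ML-based irrigation scheduling the review describes, assuming a random forest and a toy three-feature dataset (soil moisture, temperature, humidity); the features and synthetic data are illustrative, not drawn from the surveyed studies.

```python
# Illustrative sketch: a random forest forecasting daily water requirement
# from sensor inputs, as in the predictive scheduling surveyed above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
soil_moisture = rng.uniform(10, 40, n)   # %
temperature = rng.uniform(18, 38, n)     # deg C
humidity = rng.uniform(30, 90, n)        # % RH
# Toy target: drier, hotter days need more water (litres per plant).
water_need = (8 - 0.15 * soil_moisture + 0.12 * temperature
              - 0.03 * humidity + rng.normal(0, 0.5, n))

X = np.column_stack([soil_moisture, temperature, humidity])
rf = RandomForestRegressor(n_estimators=200, random_state=1)
print("R^2:", cross_val_score(rf, X, water_need, cv=5).mean())
```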
- Research Article
- 10.1002/ima.22905
- May 11, 2023
- International Journal of Imaging Systems and Technology
COVID-19 has affected more than 760 million people all over the world, as per the latest record of the WHO. The rapid proliferation of COVID-19 patients not only created a health emergency but also led to an economic crisis. An early and accurate diagnosis of COVID-19 can help in combating this deadly virus. In line with this, researchers have proposed several machine learning (ML) and deep learning (DL) techniques for detecting COVID-19 since 2020. This article presents currently available manual diagnosis methods along with their limitations. It also provides an extensive survey of ML and DL techniques that can support medical professionals in the precise diagnosis of COVID-19. ML methods, namely K-nearest neighbor, support vector machine (SVM), artificial neural network, decision tree, and naive Bayes, and DL methods, viz. deep neural network, convolutional neural network (CNN), region-based convolutional neural network, and long short-term memory, are explored. It also provides details of the latest COVID-19 open-source datasets, consisting of x-ray and computed tomography scan images. A comparative analysis of ML and DL techniques developed for COVID-19 detection in terms of methodology, datasets, sample size, type of classification, performance, and limitations is also presented. It has been found that SVM is the most frequently used ML technique, while CNN is the most commonly used DL technique for COVID-19 detection. The challenges of existing datasets have been identified, including the size and quality of datasets, lack of labeled datasets, severity level, data imbalance, and privacy concerns. It is recommended that a benchmark dataset overcoming these challenges be established to enhance the effectiveness of ML and DL techniques. Further, hurdles in implementing ML and DL techniques in real-time clinical settings are also highlighted. In addition, motivated by the existing methods, the research is extended with an optimized DL model that attains improved performance using statistical and deep features. The optimized deep model achieves accuracy above 90% through efficient features and proper classifier tuning.
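As a hedged illustration of the CNN approach the survey identifies as most common, the sketch below defines a small binary classifier for chest x-rays. The input size, depth, and grayscale assumption are illustrative choices, not any surveyed model's configuration.

```python
# A minimal CNN of the kind most often used for COVID-19 detection
# from chest x-rays, per the survey above.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),        # grayscale x-ray (assumed size)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),    # COVID vs. non-COVID
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```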
- Research Article
- 10.1051/itmconf/20257403008
- Jan 1, 2025
- ITM Web of Conferences
The thoughts, perceptions, attitudes, feedback, and even emotions expressed by people on social networking and e-commerce sites are the primary focus of sentiment analysis, also referred to as opinion mining. It provides meaningful information that helps various stakeholders and customers decide their next move. However, the biggest challenge is the extraction of relevant information from such tremendous volumes of data. Machine learning and deep learning techniques have obtained remarkable success in exemplifying and classifying information. Machine learning works with the binary classification of information, whereas deep learning provides automatic feature detection. A study was carried out to extract the relevant information from the Amazon reviews dataset of electronics products. Naïve Bayes, support vector machine, decision tree, convolutional neural network, long short-term memory, recursive neural network, and recurrent neural network models were applied to the dataset after different data preprocessing steps. To evaluate the performance of the various machine learning and deep learning techniques, the F1 score, precision, recall, and accuracy were used. The results suggest that deep learning techniques have outperformed the machine learning techniques, and the RNN shows the highest accuracy among all the techniques.
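A minimal sketch of an RNN-style classifier like the best performer reported here, assuming an LSTM layer, a 20,000-word vocabulary, and padded 200-token reviews; these hyperparameters are assumptions, not the study's.

```python
# Sketch of a recurrent sentiment classifier for product reviews.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 20000, 200                    # assumed vocabulary / length
model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, 128),
    layers.LSTM(64),                          # recurrent layer
    layers.Dense(1, activation="sigmoid"),    # positive vs. negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.summary()
```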
- Research Article
- 10.47392/irjaem.2025.0346
- Jun 4, 2025
- International Research Journal on Advanced Engineering and Management (IRJAEM)
Optimized water management in agriculture is a critical issue, especially in water-scarce regions. This work introduces an IoT-based Automated Irrigation System that utilizes real-time sensor feedback, machine learning, and weather forecasting to manage water efficiently. The system uses Node-RED and HiveMQ (MQTT) for convenient communication and control, employing ESP8266 microcontrollers to interface with soil moisture, temperature, and humidity sensors. Moreover, external weather data is retrieved through the OpenWeather API to enhance irrigation scheduling accuracy. A machine learning model, trained to predict irrigation need from environmental and sensor inputs, allows the system to automate motor operation with minimal human intervention. The model learns and adapts continuously to changing climate patterns and soil types, improving reliability and efficiency. This method not only saves water but also aids sustainable agriculture. The system has been validated in a laboratory setting with encouraging results, showing that it can be scaled up for deployment in smart farming.
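The publish/decide loop might look like the hedged sketch below, which uses the public HiveMQ broker and a simple threshold rule standing in for the trained model; the topic names and threshold are assumptions.

```python
# Hypothetical sketch of the sensing-to-actuation loop: publish a sensor
# reading over MQTT, then decide whether to switch the irrigation motor on.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style; v2 also takes a CallbackAPIVersion
client.connect("broker.hivemq.com", 1883, keepalive=60)

reading = {"soil_moisture": 21.5, "temperature": 31.2, "humidity": 48.0}
client.publish("farm/plot1/sensors", json.dumps(reading))

def needs_irrigation(r):
    # Stand-in for the trained ML model's prediction (assumed rule).
    return r["soil_moisture"] < 25.0

client.publish("farm/plot1/motor", "ON" if needs_irrigation(reading) else "OFF")
client.disconnect()
```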
- Book Chapter
- 10.1016/b978-0-323-85209-8.00007-9
- Jan 1, 2022
- Machine Learning for Biometrics
Chapter 10 - Contemporary survey on effectiveness of machine and deep learning techniques for cyber security
- Book Chapter
- 10.1007/978-3-030-92087-6_35
- Jan 1, 2022
Machine learning (ML) and deep learning (DL) techniques have been increasingly applied to help diagnose coronary artery disease (CAD) as well as help with patient management decisions. Imaging has begun to play a larger role in these studies. Cardiovascular magnetic resonance (CMR) offers multiple techniques to diagnose CAD, and ML and DL have been used with these techniques in an effort to improve both the image quality and the speed of image interpretation. In particular, ML and DL have been applied to direct imaging of coronary vessel anatomy, imaging of coronary flow, and myocardial perfusion imaging. In applications aimed at imaging the coronary artery anatomy, ML and DL techniques have been used to improve image quality in reconstruction, improve the speed of reconstruction, allow for more sparse sampling of data, and enable automated evaluation of image quality. In applications of coronary flow imaging, ML and DL techniques have been used to reduce the uncertainty of phase-contrast measurements of blood velocity and flow, and physics-informed neural networks have been used to improve the modeling of flow based on both acquired image data and natural laws of motion. In myocardial perfusion imaging, ML and DL techniques have been used at multiple steps in the image analysis process to automate quantitative blood flow measurements, including motion correction, image registration, tracer kinetic modeling, and detection of perfusion defects. Future applications of ML and DL in evaluating CAD are expected to continue to develop with increasing impact in both diagnosis and patient management.
Keywords: Magnetic resonance; Coronary arteries; Coronary artery disease; Coronary flow; Machine learning; Deep learning
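As a toy illustration of the physics-informed neural networks mentioned above, the sketch below fits sparse noisy samples while penalizing the residual of a simple decay ODE that stands in for the real flow physics; the equation, network size, and data are all assumptions, not the chapter's methods.

```python
# Toy PINN: the loss combines a fit to sparse "measured" data with the
# residual of a governing equation, here du/dt = -k*u (assumed physics).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
k = 1.5
t_data = torch.rand(20, 1)
u_data = torch.exp(-k * t_data) + 0.02 * torch.randn(20, 1)  # noisy samples
t_col = torch.linspace(0, 1, 100).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    u_pred = net(t_col)
    du_dt, = torch.autograd.grad(u_pred.sum(), t_col, create_graph=True)
    physics = ((du_dt + k * u_pred) ** 2).mean()   # ODE residual
    data = ((net(t_data) - u_data) ** 2).mean()    # data misfit
    loss = data + physics
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```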
- Book Chapter
- 10.1201/9781003138037-4
- Nov 3, 2021
Rapid advancements in communication technology have supported the invention of various internet-based devices. These devices communicate with one another and provide data from the physical world. Nowadays, internet-connected devices are used in various fields to make things easier, and a great number of devices have been deployed, depending upon requirements. At the same time, the data produced by such devices is gradually increasing. To process the collected data, machine learning and deep learning techniques are applied. The Internet of Things (IoT) produces big datasets with multiple modalities but also a range of data with different quality standards. Processing all of this data within a certain time frame is important but challenging. In this scenario, cloud computing offers the optimal solution, since the generated data is sent to distant cloud infrastructures. In addition to cloud technology, machine learning (ML) and deep learning (DL) techniques are integrated with cloud computing to improve effectiveness. In ML, training data is supplied so that the system learns a set of rules from inferences on the data. The huge amounts of data stored in the cloud provide the input to DL techniques. The DL architecture is derived from the Artificial Neural Network (ANN) and uses multiple layers of nonlinear processing and transformation. The deep learning approach uses unknown elements in the input data to group objects, generate features, and find new data patterns to build the model.
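The "set of rules" view of ML can be made concrete with the hedged sketch below, which trains a small decision tree and prints its learned rules; the toy IoT-style feature names are assumptions.

```python
# Minimal illustration: a decision tree learned from data, with its
# rules printed in readable if/else form.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["temp", "humidity", "vibration"]))
```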
- Research Article
- 10.1016/j.egyr.2023.08.009
- Aug 16, 2023
- Energy Reports
Prediction of oil and gas pipeline failures through machine learning approaches: A systematic review
- Conference Article
- 10.4043/35037-ms
- Apr 29, 2024
This study presents a novel approach for estimating relative permeability curves using Machine Learning (ML) and Deep Learning (DL) techniques based on production data. The method aims to overcome the shortage of core data, which is much needed for reservoir simulation. The adopted approach involves devising an algorithm that draws on different methodologies and is trained on synthetic simulation data. The water-oil relative permeability curves were correlated with various parameters such as production data and reservoir properties. Subsequently, the model was applied to real field data to predict relative permeability curves. The procedural framework involves data compilation, model training, application to real field data, and integration with reservoir simulation. In data compilation, synthetic simulation outcomes, including water-oil relative permeability curves, oil and water rates, flowing pressures, and diverse reservoir characteristics, are gathered. Model training focuses on developing an ML model proficient in discerning complex relationships between production parameters and relative permeability curves. The model's accuracy is then rigorously assessed on data not used during training. In the application to field data, the trained model is employed to estimate relative permeability curves from authentic production data. Finally, integration with reservoir simulation entails assimilating the estimated curves into simulations to improve initial history matching. This process is particularly crucial where core data is non-existent or only limited data points are available, and it contributes significantly to history-matching accuracy. The outcomes demonstrate the effectiveness of the methodology, highlighting the successful use of ML and DL to predict relative permeability curves from production data. Integrating these curves into reservoir simulations yields a notable improvement in the accuracy of the history-matching process in a short time compared with traditional approaches. The observations elucidate the adeptness of ML and DL in capturing intricate relationships between production parameters and relative permeability curves. The estimated curves serve as a pivotal initial step for history matching, substantially mitigating uncertainty within reservoir simulations. In conclusion, this study introduces a new approach for predicting relative permeability curves by leveraging ML and DL integrated with production data, addressing the uncertainties linked to traditional core-based measurements by furnishing a more accurate initial prediction. The successful integration of this approach into reservoir simulations promises to streamline and enhance reservoir management practices. The emphasis on utilizing available production data helps mitigate the scarcity of Special Core Analysis (SCAL) data and contributes to refined reservoir simulation outcomes and more efficient history-matching processes.
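A hedged sketch of the general idea follows: a model maps toy production features to parameters of a Corey-type relative permeability model, from which the curves are rebuilt. The Corey parametrization, feature list, and synthetic data are assumptions; the paper does not disclose its exact formulation.

```python
# Hypothetical sketch: learn production-features -> Corey parameters,
# then reconstruct water-oil relative permeability curves.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(2)
n = 500
# Toy features: water-cut slope, plateau rate, permeability, porosity.
X = rng.uniform(0, 1, size=(n, 4))
# Toy targets: Corey exponents (nw, no) and water endpoint krw_end.
Y = np.column_stack([1.5 + 2 * X[:, 0], 1.5 + 2 * X[:, 1], 0.2 + 0.5 * X[:, 2]])
Y += rng.normal(0, 0.05, Y.shape)

model = MultiOutputRegressor(GradientBoostingRegressor(random_state=2)).fit(X, Y)
nw, no, krw_end = model.predict(X[:1])[0]

Sw = np.linspace(0.2, 0.8, 50)        # assumed Swc = 0.2, Sorw = 0.2
Swn = (Sw - 0.2) / (1 - 0.2 - 0.2)    # normalized water saturation
krw = krw_end * Swn ** nw             # Corey water curve
kro = (1 - Swn) ** no                 # Corey oil curve (kro_end = 1 assumed)
```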
- Book Chapter
- 10.4018/978-1-6684-5673-6.ch003
- Oct 21, 2022
Machine learning (ML) and deep learning (DL) techniques play a significant role in diabetic retinopathy (DR) detection by grading severity levels or segmenting retinal lesions. High blood sugar levels due to diabetes cause DR, a leading cause of blindness. Manual detection or grading of DR requires ophthalmologists' expertise and is time-consuming and prone to human error. Therefore, using fundus images, ML and DL algorithms enable automatic DR detection. Fundus image analysis supports the early detection, monitoring, and treatment evaluation of DR conditions. Understanding fundus image analysis requires strong knowledge of the imaging system and of ML and DL functionalities in computer vision. DL in fundus imaging is a rapidly expanding research area. This chapter presents fundus images, DR, and its severity levels. It also explains the performance analysis of various ML- and DL-based DR detection techniques. Finally, the role of ML and DL techniques in DR detection and severity grading is discussed.
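A typical severity-grading setup of the kind this chapter surveys might look like the sketch below: transfer learning from ImageNet to five DR grades (0 = no DR through 4 = proliferative). The backbone, input size, and head are illustrative assumptions.

```python
# Sketch of a transfer-learning CNN for 5-class DR severity grading.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                        # train only the new head
model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(5, activation="softmax"),    # DR severity grades 0-4
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```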
- Research Article
- 10.11591/ijeecs.v35.i2.pp1244-1252
- Aug 1, 2024
- Indonesian Journal of Electrical Engineering and Computer Science
In today's digital world, Android phones play a vital part in many facets of people's personal and professional lives, helping users get things done faster and stay organized. The number of malicious applications has also been growing proportionately. Since the Play Store offers millions of apps, detecting malware apps is a challenging task. In this paper, a methodology is introduced for detecting malware in Android applications through the utilization of global image shape transform (GIST) features extracted from grayscale images of the applications. The dataset comprises samples of both malware and benign apps collected from the VirusShare website. After converting the apps into grayscale images, GIST features are extracted to capture their global spatial layout. Various machine learning (ML) algorithms, such as logistic regression (LR), k-nearest neighbour (KNN), AdaBoost, decision tree (DT), Naïve Bayes (NB), random forest (RF), support vector machine (SVM), extra tree classifier (ETC), and gradient boosting (GB), are employed to classify the applications according to their GIST features. Furthermore, a feed-forward neural network (FFNN) is utilized as a deep learning (DL) technique to further improve classification accuracy. The performance of each algorithm is evaluated using metrics such as accuracy, precision, and recall. The results demonstrate that the FFNN achieves superior accuracy compared to traditional ML classifiers, indicating its effectiveness in detecting malware in Android apps.
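A hedged sketch of this pipeline follows: an app's bytes are reshaped into a grayscale image and global texture features are extracted for classification. A Gabor filter bank stands in for the true GIST descriptor here, and the image size and grid are assumptions.

```python
# Hypothetical sketch: app bytes -> grayscale image -> GIST-like features.
import numpy as np
from skimage.filters import gabor
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier

def apk_to_image(path, width=256):
    """Reinterpret the app's raw bytes as a 2-D grayscale image."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = len(data) // width
    return resize(data[: rows * width].reshape(rows, width), (256, 256))

def gistlike_features(img, freqs=(0.1, 0.2, 0.3), n_orient=4):
    """Gabor responses averaged over a coarse 4x4 grid, GIST-style."""
    feats = []
    for f in freqs:
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            real, _ = gabor(img, frequency=f, theta=theta)
            blocks = real.reshape(4, 64, 4, 64).mean(axis=(1, 3))
            feats.extend(blocks.ravel())
    return np.array(feats)

# Usage (paths and labels assumed):
# X = np.stack([gistlike_features(apk_to_image(p)) for p in apk_paths])
# clf = RandomForestClassifier().fit(X, labels)  # 0 = benign, 1 = malware
```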
- Research Article
- 10.1002/cpe.7159
- Jul 26, 2022
- Concurrency and Computation: Practice and Experience
Gesture recognition is the foremost need in building intelligent human-computer interaction systems to solve many day-to-day problems and simplify human life in this digital world. Traditional machine learning (ML) algorithms, which try to capture specific handcrafted features, have failed in some real-world environments. Deep learning (DL) techniques have become a sensation among researchers in recent years, making traditional ML approaches quite obsolete. However, existing reviews consider only a few datasets to which DL algorithms have been applied, and their categorization of DL algorithms is vague. This study provides a precise categorization of DL algorithms and considers around 15 gesture datasets to which these techniques have been applied. It also gives a brief overview of the numerous challenging datasets available to the research community and insight into the various challenges and limitations of DL algorithms in vision-based dynamic gesture recognition.
- Book Chapter
- 10.1016/b978-0-323-90615-9.00016-5
- Jan 1, 2022
- Blockchain Applications for Healthcare Informatics
18 - 5G-enabled deep learning-based framework for healthcare mining: State of the art and challenges
- Research Article
- 10.1016/j.ifacol.2018.08.134
- Jan 1, 2018
- IFAC PapersOnLine
Design and performance analysis of soil temperature and humidity sensor
- Book Chapter
- 10.1201/9781003283195-6
- Jan 3, 2023
Software measurement (SM) is an umbrella activity spanning the entire software development cycle. Measurements and metrics of software attributes are indispensable for the successful completion of a project and the effective delivery of a software product. This chapter discusses SM using deep learning (DL) techniques from the perspective of an empirical study. It is evident that inaccurate prediction or estimation during software development leads to losses of money and projects. Since the beginning of software engineering, a wide range of methods has been deployed for measuring software attributes. At present, conventional techniques are not well suited to SM due to the excessively complex attributes of very large software systems. Machine learning (ML) has answered many market needs over the past 30 years. ML performs measurements in software engineering processes quite well, but it is not the best method and needs enhancement. DL is an extension of ML and is now being extensively used for SM. The chapter begins with an introduction to ML and DL techniques and their empirical applications in SM. It then highlights the literature in the field of empirical SM using DL techniques. One of the most important DL techniques, the convolutional neural network, is discussed as a case study. The chapter gives readers a practical orientation to applying DL techniques to SM.
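As a toy illustration of applying a CNN to SM, the sketch below defines a one-dimensional convolutional regressor over per-module metric sequences; the input layout and effort-estimation target are assumptions, not the chapter's case study.

```python
# Illustrative Conv1D model: predict a software measurement target
# (e.g., effort) from a sequence of per-module metric vectors.
import tensorflow as tf
from tensorflow.keras import layers, models

# 50 modules per project, 6 static metrics per module (assumed layout).
model = models.Sequential([
    layers.Input(shape=(50, 6)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                 # regression output: estimated effort
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```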