Benford's Law in Basic RNN and Long Short‐Term Memory and Their Associations

Abstract

Benford's Law describes the frequency distribution of leading digits in many sets of natural numbers: it divides numbers into nine groups by first digit, with the largest group comprising numbers beginning with 1, followed by those starting with 2, and so on. Each neuron within a neural network (NN) is associated with a numerical value called a weight, which is updated according to specific functions. This research examines the Degree of Benford's Law Existence (DBLE) across two language-model architectures: (1) recurrent neural networks (RNNs) and (2) long short-term memory (LSTM). It also investigates whether models with higher performance exhibit a stronger presence of DBLE. Two neural network language models, (1) a simple RNN and (2) an LSTM, were selected as the subject models for the experiment. Each model was tested with five different optimizers and four different datasets (textual corpora selected from Wikipedia), yielding 20 configurations per model. The neuron weights for each configuration were extracted at each epoch, and five metrics were measured per epoch: (1) DBLE, (2) training-set accuracy, (3) training-set error, (4) test-set accuracy, and (5) test-set error. The results show that the weights in both models, across all optimizers, follow Benford's Law. The findings also indicate a strong correlation between DBLE and training-set performance in both language models: models that perform better on the training set exhibit a stronger presence of Benford's Law in their weights.
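
For readers who want to experiment, below is a minimal Python sketch of the kind of first-digit analysis the abstract describes: extract the leading digit of each neuron weight and compare the observed frequencies against Benford's expected distribution. The paper's exact DBLE formula is not reproduced on this page, so scoring conformance with a Pearson correlation here is an assumption.

```python
# A hedged sketch of first-digit analysis on neural network weights.
# Assumption: conformance (a stand-in for the paper's DBLE) is scored as the
# Pearson correlation between observed and Benford-expected digit frequencies.
import numpy as np

def benford_expected():
    """Benford's Law: P(d) = log10(1 + 1/d) for leading digits d = 1..9."""
    digits = np.arange(1, 10)
    return np.log10(1.0 + 1.0 / digits)

def first_digits(values):
    """Leading decimal digit of each nonzero value's magnitude."""
    v = np.abs(values[values != 0])
    exponents = np.floor(np.log10(v))
    return (v / 10.0 ** exponents).astype(int)  # each value scaled into [1, 10)

def benford_conformance(weights):
    """Correlate observed first-digit frequencies with Benford's distribution."""
    d = first_digits(np.ravel(weights))
    observed = np.bincount(d, minlength=10)[1:10] / d.size
    return float(np.corrcoef(observed, benford_expected())[0, 1])

# Demo: values spread over several orders of magnitude conform closely.
rng = np.random.default_rng(0)
w = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)
print(benford_conformance(w))  # close to 1.0
```

In the study itself, this kind of measurement would be repeated on the extracted weights at every epoch, alongside training-set and test-set accuracy and error.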

References (showing 10 of 17 papers)
  • Cited by 11
  • 10.3390/e23050557
Reliability of Financial Information from the Perspective of Benford’s Law
  • Apr 30, 2021
  • Entropy
  • Ionel Jianu + 1 more

  • Cited by 769
  • 10.1007/978-3-030-05318-5_1
Hyperparameter Optimization
  • Jan 1, 2019
  • Matthias Feurer + 1 more

  • Cited by 9
  • 10.1093/pubmed/fdac005
Applying Benford’s law to COVID-19 data: the case of the European Union
  • Mar 23, 2022
  • Journal of Public Health (Oxford, England)
  • Pavlos Kolias

  • Cited by 13
  • 10.3390/electronics10192378
Analysis of Benford’s Law for No-Reference Quality Assessment of Natural, Screen-Content, and Synthetic Images
  • Sep 29, 2021
  • Electronics
  • Domonkos Varga

  • Cited by 6
  • 10.3390/risks11070120
The Silicon Valley Bank Failure: Application of Benford’s Law to Spot Abnormalities and Risks
  • Jul 3, 2023
  • Risks
  • Anurag Dutta + 4 more

  • Cited by 4
  • 10.3390/publications11010014
Can Retracted Social Science Articles Be Distinguished from Non-Retracted Articles by Some of the Same Authors, Using Benford’s Law or Other Statistical Methods?
  • Mar 3, 2023
  • Publications
  • Walter R Schumm + 4 more

  • Cited by 426
  • 10.1007/978-1-4842-2766-4_7
Introduction to Keras
  • Jan 1, 2017
  • Nikhil Ketkar

  • Cited by 5
  • 10.3390/electronics10222768
No-Reference Video Quality Assessment Based on Benford’s Law and Perceptual Features
  • Nov 12, 2021
  • Electronics
  • Domonkos Varga

  • Cited by 6678
  • 10.1213/ane.0000000000002864
Correlation Coefficients: Appropriate Use and Interpretation.
  • May 1, 2018
  • Anesthesia & Analgesia
  • Patrick Schober + 2 more

  • Cited by 12
  • 10.3390/jtaer17010016
Application of Benford’s Law on Cryptocurrencies
  • Feb 25, 2022
  • Journal of Theoretical and Applied Electronic Commerce Research
  • Jernej Vičič + 1 more

Similar Papers
  • Conference Article
  • Cited by 1
  • 10.1109/icisce.2016.195
Comparison of Various Neural Network Language Models in Speech Recognition
  • Jul 1, 2016
  • Lingyun Zuo + 2 more

In recent years, research on language modeling for speech recognition has increasingly focused on the application of neural networks. However, the performance of neural network language models strongly depends on their architecture. Three competing concepts have been developed: first, feed-forward neural networks representing an n-gram approach; second, recurrent neural networks that can learn context dependencies spanning more than a fixed number of predecessor words; and third, long short-term memory (LSTM) neural networks that can fully exploit long-range correlations in a telephone conversation corpus. In this paper, we compare count models to feed-forward, recurrent, and LSTM neural networks on conversational telephone speech recognition tasks. Furthermore, we put forward a language model estimation method that introduces information from history sentences. We evaluate the models in terms of perplexity and word error rate, experimentally validating the strong correlation of the two quantities, which we find to hold regardless of the underlying type of language model. The experimental results show that the performance of the LSTM neural network language model is optimal in n-best list re-scoring. Compared to first-pass decoding, the relative decline in average word error rate is 4.3% when using ten candidate results to re-score in conversational telephone speech recognition tasks.
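
As a concrete illustration of the two quantities this abstract relates, the sketch below computes perplexity from per-token log-probabilities and performs a toy n-best re-scoring. The scores, hypotheses, and LM weight are invented for illustration only.

```python
# Hedged sketch: perplexity and n-best re-scoring with invented scores.
import math

def perplexity(log_probs):
    """Perplexity from per-token natural-log probabilities: exp(-mean log p)."""
    return math.exp(-sum(log_probs) / len(log_probs))

def rescore_nbest(hypotheses, lm_weight=0.8):
    """Pick the hypothesis maximizing acoustic score plus weighted LM score."""
    return max(hypotheses, key=lambda h: h["am_score"] + lm_weight * h["lm_score"])

nbest = [
    {"text": "recognize speech",   "am_score": -12.1, "lm_score": -4.0},
    {"text": "wreck a nice beach", "am_score": -11.8, "lm_score": -9.5},
]
print(rescore_nbest(nbest)["text"])    # "recognize speech"
print(perplexity([-1.2, -0.7, -2.1]))  # ~3.79
```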

  • Conference Article
  • Cited by 5
  • 10.1109/icacccn.2018.8748691
Meta-heuristic based Optimized Deep Neural Network for Streaming Data Prediction
  • Oct 1, 2018
  • Puneet Kumar + 1 more

The categories and quantity of data are expanding exponentially with the ongoing wave of connectivity. A number of connected devices and data sources continuously generate a huge amount of data at very high speed. This paper investigates various methods, such as the Naive Bayes classifier, Very Fast Decision Trees (VFDT), ensemble methods, and clustering-based methods, that have been used for streaming data processing. In this paper, a recurrent neural network (RNN) is implemented to predict the next sequence of a data stream. Three types of sequential data streams are considered: uniform rectangular data, uniform sinusoidal data, and non-uniform sinc pulse data. Various RNN architectures, such as simple RNN, RNN with long short-term memory (LSTM), RNN with gated recurrent units (GRU), and RNN optimized with a Genetic Algorithm (GA), are implemented for various combinations of network hyper-parameters, such as the number of hidden layers, number of neurons per layer, activation function, and optimizer. The optimal combination of hyper-parameters is selected using the GA. With sample data streams, simple RNN shows better prediction accuracy than LSTM and GRU for single-hidden-layer architectures. As the RNN architectures get deeper, LSTM and GRU outperform simple RNN. The optimized version of RNN has been experimentally observed to be 78.13% faster than a single-layered LSTM architecture and 82.76% faster than the LSTM model with 4 hidden layers, with declines in accuracy of 8.67% and 12.67%, respectively.
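
The GA-based hyper-parameter selection the abstract mentions can be sketched roughly as below. The search space, rates, and the stubbed evaluate() fitness (which a real run would replace with training and validating an RNN) are illustrative assumptions.

```python
# Hedged sketch of GA-style hyper-parameter selection for an RNN.
import random

SEARCH_SPACE = {
    "hidden_layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
    "activation": ["tanh", "relu"],
    "optimizer": ["adam", "rmsprop", "sgd"],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(ind):
    # Placeholder fitness; a real run would train the RNN with this
    # configuration and return its validation accuracy.
    return -abs(ind["hidden_layers"] - 2) - abs(ind["units"] - 64) / 64.0

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    return {k: random.choice(v) if random.random() < rate else ind[k]
            for k, v in SEARCH_SPACE.items()}

population = [random_individual() for _ in range(12)]
for generation in range(10):
    population.sort(key=evaluate, reverse=True)
    parents = population[:4]  # keep the fittest configurations
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print(max(population, key=evaluate))
```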

  • Research Article
  • Cited by 1
  • 10.1049/cje.2019.03.015
Language Model Score Regularization for Speech Recognition
  • May 1, 2019
  • Chinese Journal of Electronics
  • Yike Zhang + 2 more

Inspired by the fact that back-off and interpolated smoothing algorithms have a significant effect on statistical language modeling, this paper proposes a sentence-level language model (LM) score regularization algorithm to improve the fault tolerance of LMs against recognition errors. The proposed algorithm is applicable to both count-based LMs and neural network LMs. Instead of predicting the occurrence of a sequence of words under a fixed-order Markov assumption, we use a composite model consisting of different-order models with either n-gram or skip-gram features to estimate the probability of the word sequence. To simplify implementation, we derive a connection between bidirectional neural networks and the proposed algorithm. Experiments were carried out on the Switchboard corpus. Results on N-best list re-scoring show that the proposed algorithm achieves consistent word error rate reductions when applied to count-based LMs, feedforward neural network (FNN) LMs, and recurrent neural network (RNN) LMs.
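
The composite-model idea, combining different-order models rather than relying on a single fixed-order Markov model, can be written as a convex combination of component probabilities. The component estimators below are invented placeholders, not the paper's models.

```python
# Hedged sketch: interpolate predictions from models of different orders.
def composite_prob(word, history, models, weights):
    """P(word | history) as a convex combination of component models."""
    assert abs(sum(weights) - 1.0) < 1e-9, "interpolation weights must sum to 1"
    return sum(w * m(word, history) for m, w in zip(models, weights))

# Hypothetical unigram/bigram/trigram estimators, for illustration only.
unigram = lambda w, h: 0.10
bigram  = lambda w, h: 0.25
trigram = lambda w, h: 0.40

p = composite_prob("recognition", ("speech",),
                   [unigram, bigram, trigram], weights=[0.2, 0.3, 0.5])
print(p)  # 0.295
```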

  • Research Article
  • 10.54254/2754-1169/105/20241990
Predicting CSI 300 Index and NASDAQ Index by Simple RNN and LSTM
  • Sep 11, 2024
  • Advances in Economics, Management and Political Sciences
  • Bowen Lu

Stocks are an important part of the financial market, and their prices can reflect the economic level of a country, so predicting stock trends is significant. With high noise, non-linearity, and other complex features, stock systems are hard to predict accurately with traditional statistical models, and deep learning methods are better suited to stock prediction. In this study, the CSI 300 index and the NASDAQ index are selected as research targets. Considering that one model cannot fit all stocks, the simple Recurrent Neural Network (RNN) model and its variant, the Long Short-Term Memory (LSTM) model, are chosen as the two main forecasting methods, and their prediction results are compared to determine which model fits better. As assessment indicators, graphs and root mean square error (RMSE) evaluate the accuracy of the prediction results both visually and numerically. Experimental results show that both simple RNN and LSTM predict the CSI 300 index better than the NASDAQ index; neither model performs well on the NASDAQ test set. The high discreteness and sudden changes of the NASDAQ index may be potential reasons.
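
For reference, the study's numerical criterion, root mean square error, takes only a few lines; the series values below are illustrative, not index data.

```python
# RMSE: root of the mean squared difference between actual and predicted.
import numpy as np

def rmse(actual, predicted):
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2))

print(rmse([3105.2, 3120.8, 3098.4], [3101.0, 3125.5, 3102.9]))  # ~4.47
```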

  • Research Article
  • 10.55041/ijsrem16617
Comparative Analysis of Deep Learning Approaches for Twitter Text Classification
  • Oct 21, 2022
  • International Journal of Scientific Research in Engineering and Management
  • Lukesh Kadu

Sentiment analysis (also known as opinion mining or emotion AI) is the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information. It is widely applied to voice-of-the-customer materials such as reviews and survey responses, online and social media, and healthcare materials, for applications that range from marketing to customer service to clinical medicine. With the rise of deep language models such as RoBERTa, more difficult data domains can also be analyzed, e.g., news texts, where authors typically express their opinion/sentiment less explicitly. Sentiment analysis aims to extract opinion automatically from data and classify it as positive or negative. Twitter, a widely used social media tool, is seen as an important source of information for acquiring people's attitudes, emotions, views, and feedback. Within this context, Twitter sentiment analysis techniques were developed to decide whether textual tweets express a positive or negative opinion. In contrast to the lower classification performance of traditional algorithms, deep learning models, including the Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (Bi-LSTM), have achieved significant results in sentiment analysis. Keras is a Deep Learning (DL) framework that provides an embedding layer to produce the vector representation of words present in a document. The objective of this work is to analyze the performance of deep learning models, namely the Convolutional Neural Network (CNN), simple Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), bidirectional Long Short-Term Memory (Bi-LSTM), BERT, and RoBERTa, for classifying Twitter reviews. From the experiments conducted, it is found that the RoBERTa model performs better than CNN and simple RNN for sentiment classification. Keywords: Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Deep Learning, Bidirectional Long Short-Term Memory (Bi-LSTM), Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pre-training Approach (RoBERTa).
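
A minimal Keras sketch of the embedding-plus-recurrent pipeline the abstract describes is shown below; the vocabulary size, sequence length, and layer dimensions are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: Keras embedding layer feeding a bidirectional LSTM
# for binary (positive/negative) tweet classification.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

VOCAB_SIZE, SEQ_LEN, EMBED_DIM = 20_000, 60, 128  # illustrative values

model = Sequential([
    Embedding(VOCAB_SIZE, EMBED_DIM),  # vector representation of words
    Bidirectional(LSTM(64)),           # context in both directions
    Dense(1, activation="sigmoid"),    # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.build(input_shape=(None, SEQ_LEN))
model.summary()
```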

  • Book Chapter
  • Cited by 6
  • 10.1016/b978-0-12-822295-9.00013-3
CHAPTER 3 - Recurrent neural network: application in facies classification
  • Jan 1, 2022
  • Advances in Subsurface Data Analytics
  • Miao Tian + 1 more

  • Research Article
  • Cited by 52
  • 10.1080/10916466.2021.2003386
Predictive modeling of drilling rate index using machine learning approaches: LSTM, simple RNN, and RFA
  • Nov 7, 2021
  • Petroleum Science and Technology
  • Niaz Muhammad Shahani + 3 more

Drilling rate index (DRI) is a fundamental parameter in the investigation of rock drillability, as drillability is considered one of the main problems in rock engineering. Several researchers have continuously tried to analyze and correlate rock DRI, but the problem remains unchanged. This study elucidates machine learning approaches, namely long short-term memory (LSTM), simple recurrent neural network (RNN), and random forest algorithm (RFA), to predict the DRI of rocks using multivariate inputs: uniaxial compressive strength in MPa; Brazilian tensile strength (BTS) in MPa; brittleness value (S20); Sievers' J value (Sj); modulus ratio (MR); Shore hardness (SH); porosity (n) in %; Schimazek's F abrasivity in N/mm; and equivalent quartz content in %. For all proposed methods, the original dataset was divided into 70% for training and the remaining 30% for testing. Next, the performance indices, namely the correlation coefficient (R²), root mean square error (RMSE), variance accounted for (VAF), and a-20 index, were determined for each proposed method to examine the accuracy of the predicted data. According to the results for the LSTM, simple RNN, and RFA methods, the LSTM revealed the best prediction output for DRI, with the strongest R², the lowest RMSE, the largest VAF, and an appropriate a-20 index: 0.999, 0.13416, 0.997, and 0.999 in the training stage and 0.998, 0.19479, 0.996, and 0.997 in the testing stage, respectively. Therefore, LSTM is an applicable machine learning approach that can accurately predict the DRI.
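
The four performance indices named above can be sketched with numpy as follows. As an assumption, the a-20 index is taken to be the fraction of predictions within 20% of the measured value, its usual meaning in this literature; VAF is often reported as a percentage.

```python
# Hedged sketch of the performance indices: R^2, RMSE, VAF, and a-20.
import numpy as np

def performance_indices(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    r2 = 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    rmse = np.sqrt(np.mean((y - yhat) ** 2))
    vaf = 1.0 - np.var(y - yhat) / np.var(y)          # multiply by 100 for %
    a20 = np.mean(np.abs(yhat - y) / np.abs(y) <= 0.20)
    return r2, rmse, vaf, a20

# Illustrative measured vs. predicted DRI values, not the study's data.
print(performance_indices([50, 62, 71, 80], [52, 60, 73, 78]))
```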

  • Front Matter
  • 10.1111/exsy.12946
COVID-19 special issue: Intelligent solutions for computer communication-assisted infectious disease diagnosis.
  • Feb 24, 2022
  • Expert systems
  • Fadi Al‐Turjman

  • Research Article
  • Cited by 51
  • 10.4018/ijse.2018010103
Sentiment Analysis in the Light of LSTM Recurrent Neural Networks
  • Jan 1, 2018
  • International Journal of Synthetic Emotions
  • Subarno Pal + 2 more

Long short-term memory (LSTM) is a special type of recurrent neural network (RNN) architecture that was designed over simple RNNs to model temporal sequences and their long-range dependencies more accurately. In this article, the authors work with different types of LSTM architectures for sentiment analysis of movie reviews. It has been shown that LSTM RNNs are more effective than deep neural networks and conventional RNNs for sentiment analysis. Here, the authors explore different architectures associated with LSTM models to study their relative performance on sentiment analysis. A simple LSTM is first constructed and its performance studied. In subsequent stages, LSTM layers are stacked one upon another, which shows an increase in accuracy. Later, the LSTM layers were made bidirectional to convey data both forward and backward in the network. The authors hereby show that a layered deep LSTM with bidirectional connections has better performance in terms of accuracy compared to the simpler versions of LSTM used here.
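
The architectural progression the article studies, stacking LSTM layers and then making them bidirectional, looks roughly like the Keras sketch below; the dimensions are illustrative, not the article's settings.

```python
# Hedged sketch: stacked bidirectional LSTMs for movie-review sentiment.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

model = Sequential([
    Embedding(10_000, 64),
    # return_sequences=True passes the whole sequence to the next layer,
    # which is what stacking LSTMs "one upon another" requires.
    Bidirectional(LSTM(64, return_sequences=True)),
    Bidirectional(LSTM(32)),
    Dense(1, activation="sigmoid"),  # positive vs. negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```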

  • Conference Article
  • Cited by 13
  • 10.1109/lwmoocs50143.2020.9234363
Algorithms for the Development of Deep Learning Models for Classification and Prediction of Behaviour in MOOCS
  • Sep 29, 2020
  • Jose Edmond Meku Fotso + 3 more

MOOCs (Massive Open Online Courses) are definitely one of the best approaches to support the international agenda on inclusive and equitable education and lifelong learning opportunities for all (SDG4) [1]. A great many universities and institutions offer valuable free courses to their numerous students and to people around the world through MOOC platforms. However, because of the huge number of learners and the data they generate, learner behaviour on those platforms remains a kind of black box, both for learners themselves and for course instructors who are supposed to tutor or monitor learners in the learning process. Therefore, learners do not receive sufficient support from instructors and from their peers during the learning process [2]. This is one of the main reasons for the high dropout, low completion, and low success rates observed in MOOCs. Much research has suggested descriptive, predictive, and prescriptive models to address this issue, but most of these models focus on predicting dropout, completion, and/or success, and do not generally pay enough attention to a key step that comes before, learner behaviour, which can explain dropping out and failure. Our research aims to develop a deep learning model to predict learner behaviour (learner interactions) in the learning process, in order to equip learners and course instructors with an insightful understanding of learner behaviour. This paper focuses on analysing relevant algorithms for developing such a model. For this analysis, we used data from the UNESCO-IICBA (International Institute for Capacity Building in Africa) MOOC platform, designed for teacher training in Africa, and we examined several types of features: geographical, social-behavioural, and learning-behavioural. Learner behaviour being time-series big data, we built the predictive model using deep learning algorithms for better performance and accuracy compared to baseline machine learning algorithms. Time-series data is best handled by recurrent neural networks (RNNs) [3], so we chose RNNs and implemented and tested the three main RNN architectures: simple RNNs, GRU (Gated Recurrent Unit) RNNs, and LSTM (long short-term memory) RNNs. The models were trained using L2 regularization. Based on the prediction results, we unexpectedly found that the model with simple RNNs produced better performance and accuracy on the dataset used than the other RNN architectures. We made a couple of observations; for example, we saw a correlation between video-viewing and quiz behaviour and the participation of the learner in the learning process. This observation could allow teachers to provide meaningful support and guidance to at-risk or less active students. We also observed that the shorter the video or the quiz, the more viewers it attracted. We conclude that we could use a learner's video or quiz viewing behaviour to predict their behaviour concerning other MOOC contents. These results suggest the need for deeper research on designing educational videos and quizzes for MOOCs.

  • Research Article
  • Cited by 1
  • 10.5194/piahs-387-17-2024
A hybrid approach to enhance streamflow simulation in data-constrained Himalayan basins: combining the Glacio-hydrological Degree-day Model and recurrent neural networks
  • Nov 18, 2024
  • Proceedings of IAHS
  • Dinesh Joshi + 3 more

Abstract. The Glacio-hydrological Degree-day Model (GDM) is a distributed model, but it is prone to uncertainties due to its conceptual nature, parameter estimation, and limited data in the Himalayan basins. To enhance accuracy without sacrificing interpretability, we propose a hybrid model approach that combines GDM with recurrent neural networks (RNNs), hereafter referred to as GDM–RNN. Three RNN types – a simple RNN model, a gated recurrent unit (GRU) model, and a long short-term memory (LSTM) model – are integrated with GDM. Rather than directly predicting streamflow, RNNs forecast GDM's residual errors. We assessed performance across different data availability scenarios, with promising results. Under limited-data conditions (1 year of data), GDM–RNN models (GDM–simple RNN, GDM–LSTM, and GDM–GRU) outperformed standalone GDM and machine learning models. Compared with GDM's respective Nash–Sutcliffe efficiency (NSE), R2, and percent bias (PBIAS) values of 0.80, 0.63, and −4.78, the corresponding values for the GDM–simple RNN were 0.85, 0.82, and −6.21; for GDM–LSTM, they were 0.86, 0.79, and −6.37; and for GDM–GRU, they were 0.85, 0.8, and −5.64. Machine learning models yielded similar results, with the simple RNN at 0.81, 0.7, and −16.6; LSTM at 0.79, 0.65, and −21.42; and GRU at 0.82, 0.75, and −12.29, respectively. Our study highlights the potential of machine learning with respect to enhancing streamflow predictions in data-scarce Himalayan basins while preserving physical streamflow mechanisms.
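The hybrid idea is that the RNN predicts GDM's residual error, and the corrected flow is the GDM output plus that predicted residual. The sketch below uses invented numbers and includes the Nash-Sutcliffe efficiency (NSE), one of the metrics reported above.

```python
# Hedged sketch of GDM-RNN residual correction, with invented flows.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

observed      = np.array([120.0, 135.0, 150.0, 142.0])  # streamflow
gdm_output    = np.array([110.0, 128.0, 158.0, 150.0])  # physical model
residual_pred = np.array([8.0, 5.0, -6.0, -7.0])        # from the trained RNN

hybrid = gdm_output + residual_pred  # GDM-RNN corrected streamflow
print(nse(observed, gdm_output))  # ~0.43 before correction
print(nse(observed, hybrid))      # ~0.97 after correction
```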

  • Research Article
  • 10.1121/1.4969753
Optimization of topic estimation for the domain adapted neural network language model
  • Oct 1, 2016
  • Journal of the Acoustical Society of America
  • Aiko Hagiwara + 5 more

We present a neural network language model adapted for topics fluctuating in broadcast programs. Topic-adapted n-gram language models constructed by using latent Dirichlet allocation for topic estimation are widely used. The conventional method estimates topics by separating the corpora into chunks that have few sentences. While the n-gram model uses several preceding words, the recurrent neural network and long short-term memory can learn to store huge amounts of past information in their hidden layers. Consequently, chunks for language models trained using neural networks may have a longer optimal length than chunks for language models trained using the conventional methods. In this paper, the chunk length and the topic estimation process are optimized for neural network language models. For topic estimation, k-means clustering, latent Dirichlet allocation, and word2vec were compared. On the basis of the comparison results, we designed a neural network language model.

  • Conference Article
  • Cited by 18
  • 10.1109/cccs.2018.8586826
Comparison of algorithms in Foreign Exchange Rate Prediction
  • Oct 1, 2018
  • Swagat Ranjit + 3 more

Foreign currency exchange plays a vital role in the trading of currency in the financial market. Due to its volatile nature, predicting foreign currency exchange rates is a challenging task. This paper presents different machine learning techniques, such as the Artificial Neural Network (ANN) and Recurrent Neural Network (RNN), to develop prediction models for the Nepalese Rupee against three major currencies: the Euro, Pound Sterling, and US Dollar. A Recurrent Neural Network is a type of neural network that has feedback connections. In this paper, prediction models were based on different RNN architectures and a feed-forward ANN with the back-propagation algorithm, and the accuracy of each model was then compared. Different architectures were used: the feed-forward neural network, Simple Recurrent Neural Network (SRNN), Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM). Input parameters were the open, low, high, and closing prices for each currency. From this study, we found that LSTM networks provided better results than SRNN and GRU networks.

  • Conference Article
  • Cited by 15
  • 10.1109/icci51257.2020.9247757
A Review of Weight Optimization Techniques in Recurrent Neural Networks
  • Oct 8, 2020
  • Alawi Alqushaibi + 3 more

Recurrent neural networks (RNNs) have gained much attention from researchers working in the domain of time-series data processing and have proved to be an ideal choice for processing such data. As a result, several studies have been conducted on analyzing time-series data and data processing through a variety of RNN techniques. However, every type of RNN has its own flaws. Simple Recurrent Neural Networks (SRNNs) are computationally less complex than other types of RNN, such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). However, SRNNs have drawbacks such as the vanishing gradient problem, which makes them difficult to train when dealing with long-term dependencies. The vanishing gradient arises during the training of an SRNN because the gradient is repeatedly multiplied by small values when using the most traditional optimization algorithm, Gradient Descent (GD). Therefore, researchers intend to overcome such limitations by utilizing weight optimization techniques such as metaheuristic algorithms. The objective of this paper is to present an extensive review of the challenges and issues of RNN weight optimization techniques and to critically analyse the existing proposed techniques. The authors believe the review will serve as a main source for the techniques and methods used to resolve the problems of RNN time-series data and data processing. Furthermore, current challenges and issues are deliberated to find promising research domains for further study.
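
The mechanism behind the vanishing gradient described above can be seen numerically: backpropagation through time multiplies the gradient by a Jacobian-like factor at each step, so factors below one shrink it geometrically. The factor below is a stand-in for illustration, not a trained network's actual Jacobian norm.

```python
# Numerical illustration of the vanishing gradient over long sequences.
factor = 0.9  # stand-in for the per-step gradient multiplier
grad = 1.0
for t in range(1, 101):
    grad *= factor
    if t in (10, 50, 100):
        print(f"after {t:3d} steps: gradient ~ {grad:.2e}")
# after  10 steps: gradient ~ 3.49e-01
# after  50 steps: gradient ~ 5.15e-03
# after 100 steps: gradient ~ 2.66e-05
```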

  • Book Chapter
  • Cited by 2
  • 10.1201/9781003277224-8
Recurrent Neural Networks and Their Application in Seizure Classification
  • Aug 15, 2022
  • Kusumika Krori Dutta + 2 more

Deep learning (DL) architectures such as deep neural networks (DNNs), deep belief networks (DBNs), recurrent neural networks (RNNs), and convolutional neural networks (CNNs) have been applied to computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection, and board game programs, achieving performance comparable to that of human experts. With growing interest and research in the area of artificial neural networks, deep neural networks enable computers to be trained for error-free diagnosis of diseases like epilepsy. In the literature, researchers have developed many mathematical models for pre-processing EEG data and for classification between seizure and seizure-free signals or different types of network disorders. The introduction of various algorithms in artificial intelligence, such as machine learning and deep learning, aids in classifying the data with or without pre-processing in a two-class system. It is important to attempt multi-class time-series classification of various brain activities (tumours, network disorders) using these sophisticated algorithms. In this chapter, different deep learning algorithms for multiclass, time-series classification of different electrical activities in the brain are discussed. The main focus is on the application of different RNN models to seizure classification of electroencephalogram (EEG) signals. It is very important to interpret 1D EEG signals and classify different brain activities for various diagnostic purposes. The fully interconnected hidden configuration of a recurrent neural network (RNN) makes the model very powerful, enabling it to discover temporal correlations between far-away events in the data. Training an RNN architecture in a deep network is challenging because of vanishing/exploding gradients in the deeper layers. This chapter performs multiclass time-series classification of EEG signals using three different RNN techniques: the simple Recurrent Neural Network, Long Short-Term Memory (LSTM), and GRUs. A comparative study between the RNNs is done in terms of configuration, time taken, and accuracy for EEG signals acquired from people in different pathological and physiological brain states. The accuracy and time taken by multilayer recurrent neural networks are determined for five-class EEG classification using the three types of RNN networks, for 1 to 1024 units with 100 epochs and 5 different layers of 32 cells with 300 epochs, with a learning rate of 0.01. It has been observed that increasing the number of layers increases the time complexity and provides constant accuracy beyond three layers. Further, this can be extended to study accuracy and time consumption for different batch sizes and epochs in order to fix a proper network without overfitting.
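
The three recurrent variants the chapter compares can be set up side by side in Keras as below. The 178-sample, single-channel input window is an illustrative assumption, while the five output classes follow the abstract.

```python
# Hedged sketch: simple RNN vs. LSTM vs. GRU for five-class EEG classification.
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import SimpleRNN, LSTM, GRU, Dense

def make_model(cell, timesteps=178, channels=1, classes=5, units=32):
    return Sequential([
        Input(shape=(timesteps, channels)),   # one EEG segment
        cell(units),
        Dense(classes, activation="softmax"),
    ])

for cell in (SimpleRNN, LSTM, GRU):
    model = make_model(cell)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    print(cell.__name__, model.count_params())
```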

More from: Applied AI Letters
  • Research Article
  • 10.1002/ail2.70012
Automated AI ‐Based Lung Disease Classification Using Point‐of‐Care Ultrasound
  • Nov 4, 2025
  • Applied AI Letters
  • Nixson Okila + 9 more

  • Research Article
  • 10.1002/ail2.70010
Efficient Few‐Shot Learning in Remote Sensing: Fusing Vision and Vision‐Language Models
  • Nov 2, 2025
  • Applied AI Letters
  • Jia Yun Chua + 2 more

  • Journal Issue
  • 10.1002/ail2.v6.3
  • Oct 1, 2025
  • Applied AI Letters

  • Research Article
  • 10.1002/ail2.70007
Multi‐Objective Reinforcement Learning for Automated Resilient Cyber Defence
  • Sep 5, 2025
  • Applied AI Letters
  • Ross O'Driscoll + 3 more

  • Research Article
  • 10.1002/ail2.70005
Classical Machine Learning Approaches for Early Hypertension Risk Prediction: A Systematic Review
  • Aug 29, 2025
  • Applied AI Letters
  • Abebaw Agegne Engda + 2 more

  • Research Article
  • 10.1002/ail2.70004
Tsetse Fly Detection and Sex Classification Model Enrichment Employing YOLOv8 and YOLO11 Architecture
  • Aug 26, 2025
  • Applied AI Letters
  • Wegene Demisie Jima + 5 more

  • Research Article
  • 10.1002/ail2.127
Thematic Analysis of Expert Opinions on the Use of Large Language Models in Software Development
  • Aug 12, 2025
  • Applied AI Letters
  • Sargam Yadav + 2 more

  • Research Article
  • 10.1002/ail2.70002
Benford's Law in Basic RNN and Long Short‐Term Memory and Their Associations
  • Jul 29, 2025
  • Applied AI Letters
  • Farshad Ghassemi Toosi

  • Research Article
  • 10.1002/ail2.70001
Utilizing AI in Business and Entrepreneurship: Implications for Complex Decision‐Making in Engineering and Product Development Settings
  • Jul 29, 2025
  • Applied AI Letters
  • Nnamdi Gabriel Okafor + 1 more

  • Research Article
  • 10.1002/ail2.70003
Time Variant Node Ranking Technique for Chatbot Neural Graph
  • Jul 27, 2025
  • Applied AI Letters
  • Ahmed Imtiaz + 2 more
