Deep Neural Network-Based Classification Method for University English Multimedia Course Resources
With the rapid growth of online teaching resources, classifying and managing these resources efficiently has become especially important, particularly in university English teaching. Traditional classification methods often struggle with the complexity and diversity of multimedia teaching resources, which prompts us to explore more advanced solutions. This paper proposes a hybrid deep learning model combining a Long Short-Term Memory (LSTM) network and a graph attention network (GAT) to classify university English multimedia course resources effectively. The LSTM component is designed to capture time-series dependencies in course resources, while the GAT component is used to model complex relationships among resources. In a series of experiments, our model shows better classification accuracy than conventional techniques, verifying the effectiveness of combining LSTM and GAT for classifying educational resources. Furthermore, the successful implementation of the model offers valuable insights for future educational resource management and personalized learning path recommendations. This study not only advances the application of deep learning in educational technology, but also opens up a new avenue for the efficient management and utilization of university English multimedia teaching resources.
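The abstract does not give the GAT details, but the core operation of any graph attention layer is an attention-weighted aggregation over each node's neighbors. Below is a minimal single-head numpy sketch of that idea (real GATs also apply a LeakyReLU to the attention logits, use multiple heads, and learn `w` and `a` by gradient descent; the toy graph and random weights here are purely illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_aggregate(features, adjacency, w, a):
    """Single-head graph-attention aggregation: each node's new
    representation is an attention-weighted sum of its projected
    neighbors, including itself."""
    h = features @ w                      # project node features
    n = h.shape[0]
    out = np.zeros_like(h)
    for i in range(n):
        neigh = [j for j in range(n) if adjacency[i, j] or i == j]
        # attention logit for each neighbor: a . [h_i || h_j]
        logits = np.array([a @ np.concatenate([h[i], h[j]]) for j in neigh])
        alpha = softmax(logits)           # normalize over the neighborhood
        out[i] = sum(w_ij * h[j] for w_ij, j in zip(alpha, neigh))
    return out

# toy graph: 3 course resources, resource 0 linked to 1 and 2
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
feats = np.eye(3)                          # one-hot stand-in features
rng = np.random.default_rng(0)
out = gat_aggregate(feats, adj, rng.normal(size=(3, 4)), rng.normal(size=8))
print(out.shape)  # (3, 4)
```

In a hybrid model like the one described, the LSTM's per-resource sequence encoding would typically supply the `features` matrix that this aggregation then mixes across the resource graph.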
- Research Article
- 10.1016/j.measurement.2021.109545
- May 8, 2021
- Measurement
Rock mass type prediction for tunnel boring machine using a novel semi-supervised method
- Book Chapter
- 10.1007/978-981-16-0708-0_3
- Jan 1, 2021
In this paper, the primary focus is slot tagging of Gujarati dialogue, which enables Gujarati-language communication between human and machine, allowing machines to perform a given task and provide the desired output. The accuracy of tagging depends entirely on the bifurcation of slots and word embedding. Proper slot tagging is also very challenging for a researcher, as dialogue and speech differ from person to person, which makes the slot tagging methodology more complex. Various deep learning models are available to researchers for slot tagging; this paper focuses mainly on Long Short-Term Memory (LSTM), Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM), Long Short-Term Memory - Conditional Random Field (LSTM-CRF), Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network - Bidirectional Long Short-Term Memory (CNN-BiLSTM), and Bidirectional Long Short-Term Memory - Conditional Random Field (BiLSTM-CRF). Comparing these models with each other, it is observed that BiLSTM models perform better than LSTM models by roughly 2% of F1-measure, as they contain an additional layer that lets the word string traverse from backward to forward. Among the BiLSTM models, BiLSTM-CRF outperformed the other two: its F1-measure is better than CNN-BiLSTM by 1.2% and BiLSTM by 2.4%. Keywords: Spoken Language Understanding (SLU), Long Short-Term Memory (LSTM), slot tagging, Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network - Bidirectional Long Short-Term Memory (CNN-BiLSTM), Bidirectional Long Short-Term Memory - Conditional Random Field (BiLSTM-CRF)
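The CRF layer that gives BiLSTM-CRF its edge decodes the best tag sequence with the Viterbi algorithm over per-token emission scores (e.g., from a BiLSTM) and tag-to-tag transition scores. A minimal numpy sketch, with made-up scores and a two-tag scheme (the actual scores, tag set, and training procedure in the paper are not specified here):

```python
import numpy as np

def viterbi(emissions, transitions):
    """Best tag path for one sentence given per-token emission scores
    (T x K) and tag-to-tag transition scores (K x K)."""
    T, K = emissions.shape
    score = emissions[0].copy()            # best score ending in each tag
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # score of moving from every previous tag to every current tag
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # follow back-pointers from the best final tag
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy example: 3 tokens, 2 tags (0 = O, 1 = SLOT)
em = np.array([[2.0, 0.0], [0.0, 2.0], [0.0, 2.0]])
tr = np.array([[0.0, 0.0], [0.0, 1.0]])   # staying in SLOT is rewarded
print(viterbi(em, tr))  # [0, 1, 1]
```

The transition matrix is what lets the CRF enforce tag-sequence consistency that a plain per-token softmax cannot.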
- Preprint Article
- 10.7287/peerj.preprints.26589v1
- Mar 1, 2018
Background. Automatic contradiction detection, or detection of conflicting statements in text, consists of identifying discrepancy, inconsistency, and defiance in text, and has several real-world applications in question answering systems, multi-document summarization, dispute detection in news, and detection of contradictions in opinions and sentiments on social media. Automatic contradiction detection is a technically challenging natural language processing problem. Contradiction detection between sources of text, or between two sentences in a pair, can be framed as a classification problem. Methods. We propose an approach for detecting three different types of contradiction: negation, antonyms, and numeric mismatch. We derive several linguistic features from text and use them in a classification framework for detecting contradictions. The novelty of our approach relative to existing work is the application of artificial neural networks and deep learning. Our approach uses techniques such as Long Short-Term Memory (LSTM) and Global Vectors for Word Representation (GloVe). We conduct a series of experiments on three publicly available datasets for contradiction detection: the Stanford, SemEval, and PHEME datasets. In addition to the existing datasets, we also create more data and make it publicly available. We measure the performance of our proposed approach using confusion and error matrices and accuracy. Results. We examine three feature combinations on our datasets: manual features, LSTM-based features, and the combination of manual and LSTM features. The accuracy of our classifier based on both LSTM and manual features on the SemEval dataset is 91.2%; the classifier correctly classified 3204 out of 3513 instances. The accuracy of our classifier based on both LSTM and manual features on the Stanford dataset is 71.9%; the classifier correctly classified 855 out of 1189 instances.
The accuracy for the PHEME dataset is the highest across all datasets; the accuracy for the contradiction class is 96.85%. Discussion. Experimental analysis demonstrates encouraging results, supporting our hypothesis that deep learning along with LSTM-based features can be used to identify contradictions in text. Our results show an accuracy improvement over manual features after applying LSTM-based features. Accuracy varies across datasets, and we observe different accuracy across the multiple types of contradiction. Feature analysis shows that the discriminatory power of the five features varies.
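Of the three contradiction types the abstract names, numeric mismatch is the easiest to illustrate as a hand-crafted feature over a sentence pair. A stdlib-only sketch of one such crude signal (the paper's actual feature definitions are not given, so this is an assumption about what such a feature could look like):

```python
import re

def numeric_mismatch(sent_a, sent_b):
    """Return True when both sentences contain numbers but the two sets
    of numbers do not overlap -- one crude signal of a numeric
    contradiction (e.g. differing casualty counts in two reports)."""
    nums_a = set(re.findall(r"\d+(?:\.\d+)?", sent_a))
    nums_b = set(re.findall(r"\d+(?:\.\d+)?", sent_b))
    return bool(nums_a) and bool(nums_b) and nums_a.isdisjoint(nums_b)

print(numeric_mismatch("The quake killed 12 people.",
                       "Officials said 47 people died."))   # True
print(numeric_mismatch("The quake killed 12 people.",
                       "At least 12 were killed."))          # False
```

In the classification framework described, a binary feature like this would sit alongside negation and antonym features and the LSTM-derived representations.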
- Research Article
- 10.55041/ijsrem16617
- Oct 21, 2022
- INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
Abstract—Sentiment analysis (also known as opinion mining or emotion AI) is the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information. Sentiment analysis is widely applied to voice-of-the-customer materials such as reviews and survey responses, online and social media, and healthcare materials, for applications that range from marketing to customer service to clinical medicine. With the rise of deep language models such as RoBERTa, more difficult data domains can also be analyzed, e.g., news texts, where authors typically express their opinion or sentiment less explicitly. Sentiment analysis aims to extract opinion automatically from data and classify it as positive or negative. Twitter, a widely used social media tool, has been seen as an important source of information for acquiring people's attitudes, emotions, views, and feedback. Within this context, Twitter sentiment analysis techniques were developed to decide whether textual tweets express a positive or negative opinion. In contrast to the lower classification performance of traditional algorithms, deep learning models, including the Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (Bi-LSTM), have achieved significant results in sentiment analysis. Keras is a deep learning (DL) framework that provides an embedding layer to produce the vector representation of the words present in a document. The objective of this work is to analyze the performance of deep learning models, namely the Convolutional Neural Network (CNN), Simple Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (Bi-LSTM), BERT, and RoBERTa, for classifying Twitter reviews. From the experiments conducted, it is found that the RoBERTa model performs better than CNN and simple RNN for sentiment classification.
Keywords—Convolution Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Deep Learning, Bidirectional Long Short-Term Memory (BiLSTM), Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pre-training Approach (RoBERTa).
- Research Article
- 10.57197/jdr-2025-0590
- Jan 1, 2025
- Journal of Disability Research
Improving the accuracy and reliability of seizure detection with wearable electroencephalography (EEG) devices is a key application of deep learning (DL) in epilepsy diagnosis. In this study, we sought to increase the accuracy of seizure detection using advanced DL algorithms on the Children's Hospital Boston - Massachusetts Institute of Technology (CHB-MIT) EEG database. First, a fully convolutional network (FCN) was trained and assessed using accuracy and recall/precision metrics, with early stopping used to avoid overfitting. To assess performance, the FCN was evaluated in terms of various metrics, including accuracy, precision, recall, F1-score, and receiver operating characteristic (ROC)-area under the curve (AUC). In addition, two-dimensional (2D) convolutional neural networks (CNNs) and long short-term memory (LSTM) models were used to model the database, and their performance was thoroughly measured using different metrics, graphs, and confusion matrices. Hybrid convolutional LSTM (ConvLSTM) models were trained and compared using LSTM variants such as standard LSTM, bidirectional LSTM, stacked LSTM, and LSTM attention mechanisms. The comparison was conducted based on training and validation accuracy and loss, as well as precision-recall curves. Apart from DL approaches, EEG signal analysis using time-frequency techniques, such as the wavelet transform and short-time Fourier transform, was also investigated. These methods assisted in analyzing the time-frequency features of EEG signals in combination with the DL models. This study demonstrates that the performance of wearable EEG devices can be augmented using a combination of DL and seizure signal processing techniques. The FCN achieved an accuracy of 92%, with a recall for seizures of 33%, an F1-score of 0.03, and strong ROC-AUC results. The 2D CNN achieved 96% accuracy, a seizure recall of 70%, an F1-score of 0.12, and an ROC-AUC score of 78%.
The baseline LSTM struggled, with 53% accuracy and a seizure recall of 18%. In contrast, the LSTM model that incorporated synthetic minority oversampling technique (SMOTE) balancing reached up to 89% accuracy, with a precision of 91%, a recall of 86%, an F1-score of 0.89, and strong ROC curve performance, making it the best performer among the models. These results provide evidence that applying data balancing techniques in combination with certain DL network architectures significantly improves the detection of seizures using body-worn wearable EEG devices. We believe that real-time monitoring and high-performance systems are feasible using optimized DL frameworks. The analysis of the performance of the different models provides insight into how DL architectures can be optimized for modern, real-time epilepsy diagnosis. The source code used to carry out the experiments is publicly available at CHB-MIT EEG Dataset Python Scripts (https://www.kaggle.com/code/adnankust/adnaneeg1).
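The SMOTE balancing credited with the LSTM's improvement works by synthesizing new minority-class (seizure) samples along line segments between a minority point and one of its minority-class nearest neighbors. A minimal numpy sketch of that core interpolation idea (not the full algorithm as implemented in libraries such as imbalanced-learn; the toy data is illustrative):

```python
import numpy as np

def smote_like(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating a
    randomly picked sample toward one of its k nearest minority-class
    neighbors -- the core idea behind SMOTE."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        neigh = np.argsort(d)[1:k + 1]           # skip the point itself
        j = rng.choice(neigh)
        lam = rng.random()                        # interpolation factor in [0, 1)
        out.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(out)

# four minority-class points at the corners of the unit square
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote_like(minority, n_new=5)
print(synthetic.shape)  # (5, 2)
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled training set stays inside the minority class's region of feature space rather than duplicating points exactly.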
- Conference Article
- 10.1109/icspis51611.2020.9349573
- Dec 23, 2020
Network traffic forecasting means estimating future network traffic from previous traffic observations. Network traffic analysis has applications in a wide range of fields, and considerable research attention has been paid to this area in recent years. Accurate forecasting of network traffic plays an important role in network management and in improving Quality of Service (QoS). For this purpose, various techniques have been applied, such as neural network-based methods and data mining methods. This paper concentrates on examining various deep learning-based methods for analyzing and forecasting network traffic. Several Recurrent Neural Network (RNN) models, such as Random Connectivity Long Short-Term Memory (RCLSTM) and the Gated Recurrent Unit (GRU), and some feed-forward neural networks (FFNN), such as the Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN), have been studied. In addition, a combination of Long Short-Term Memory (LSTM) and MLP has been proposed as a new method. The simulations were implemented in Python and the results compared with previous algorithms, showing the high effectiveness and performance of the new approach.
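Whatever model is chosen (LSTM, GRU, MLP, or a hybrid), a univariate traffic series must first be framed as supervised (input window, target) pairs. A small numpy sketch of that standard windowing step, with a stand-in series (the paper's actual data and window sizes are not stated here):

```python
import numpy as np

def make_windows(series, n_lags, horizon=1):
    """Turn a univariate series into (X, y) pairs: each row of X holds
    n_lags past observations, y holds the value `horizon` steps ahead --
    the usual framing before feeding an LSTM, GRU, or MLP forecaster."""
    X, y = [], []
    for t in range(len(series) - n_lags - horizon + 1):
        X.append(series[t:t + n_lags])
        y.append(series[t + n_lags + horizon - 1])
    return np.array(X), np.array(y)

traffic = np.arange(10.0)                 # stand-in for traffic volumes
X, y = make_windows(traffic, n_lags=3)
print(X.shape, y.shape)  # (7, 3) (7,)
print(X[0], y[0])        # [0. 1. 2.] 3.0
```

For an RNN the `X` rows would additionally be reshaped to (samples, timesteps, features); for an MLP they are used as flat feature vectors, which is one way a hybrid LSTM+MLP design can share the same windowed data.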
- Conference Article
- 10.1109/iccci56745.2023.10128350
- Jan 23, 2023
The main aim of this project is to compare different methods for stock prediction. Many machine learning and neurocomputing approaches exist for predicting stock values, and machine learning is used effectively for stock price forecasting. Available methods include Moving Average (MA), K-Nearest Neighbors (KNN), Long Short-Term Memory (LSTM), and ARIMA. LSTM is a type of ANN and RNN neural network; in deep learning, these are capable of storing data in memory. Among these, LSTM is unique because it creates a long-term memory: it performs better on big datasets, has additional space to store longer information, and retains it for longer periods of time, whereas the other techniques cannot store as much data as LSTM does. With LSTM we use a visualization method, so it is easy to compare the data and find the accuracy value. We used Apple and Google company datasets to run LSTM. The papers we used as references worked on datasets including Chinese stock market data, a Yahoo Finance dataset (900,000 records), BSE stock tick data, LM SCG, four years of NASDAQ data, and market data for 100 NASDAQ stocks. The LSTM method gave us the most accurate values, so we chose it over the others. The main objective of this paper is to find accurate stock market values using machine learning via the best available method, for which we chose LSTM.
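The Moving Average (MA) baseline the project compares against is simple enough to state exactly: predict each next value as the mean of the previous window of observations. A stdlib-only sketch with made-up closing prices (purely illustrative, not the paper's data):

```python
def moving_average_forecast(prices, window):
    """Naive MA baseline: predict each next value as the mean of the
    previous `window` observations."""
    preds = []
    for t in range(window, len(prices)):
        preds.append(sum(prices[t - window:t]) / window)
    return preds

closes = [10.0, 11.0, 12.0, 13.0, 14.0, 15.0]
print(moving_average_forecast(closes, window=3))
# [11.0, 12.0, 13.0]
```

Baselines like this are useful precisely because they have no memory beyond the window, which is the capability gap an LSTM is meant to close.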
- Research Article
- 10.30574/wjarr.2022.14.1.0323
- Apr 30, 2022
- World Journal of Advanced Research and Reviews
The revolutionary capabilities of deep learning and the Internet of Things are generating a dramatic shift in the healthcare sector. This study examines the potential commercial transformation that may result from healthcare systems implementing deep learning for patient identification and monitoring through the Internet of Things. Wearable sensors, smart devices, and internet-connected medical equipment have made it possible for medical personnel to monitor their patients' respiration, heart rate, and other physiological indicators in real time. But the massive amounts of complicated data produced by these devices make analysis and diagnosis difficult. Deep learning algorithms do a great job of sifting through this ever-growing heap of medical records. Data collected from sensors, electronic health records (EHRs), and patient reports can be automatically analyzed for complex patterns and relationships using Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. Clinicians can use this ability to better diagnose patients, identify warning signs, and tailor therapies to each individual's needs. This study presents the specifics of an Internet of Things (IoT) healthcare system that employs CNNs and LSTMs for tasks such as feature extraction, data classification, prediction, and data preparation. In healthcare settings, using deep learning models updated in real time raises questions about interpretability, privacy, and available resources. The study demonstrates that IoT systems built on CNNs and LSTMs can enhance healthcare: they enable optimization of therapies, real-time diagnosis of diseases, and risk prediction. The application of these ideas can make healthcare more accessible and inexpensive.
Through networked devices and sophisticated analytics, the combination of the Internet of Things (IoT), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) can significantly improve disease detection, individualized therapy, and patient monitoring.
- Book Chapter
- 10.1007/978-981-16-9131-7_5
- Oct 20, 2022
In this chapter, data-driven RUL prediction methods for mechanical systems are presented. Since deep learning algorithms have shown remarkable advantages on prognosis problems in the current literature, this chapter focuses on neural network-based methods. First, the deep separable convolutional neural network-based RUL prediction method is introduced, which establishes a direct mapping from raw monitoring data to RUL by implementing separable convolutions and constructing information refinement units. Next, the recurrent convolutional neural network-based RUL prediction method is illustrated: a network with temporal memory capability is constructed using recurrent connections and gating mechanisms. Finally, we present a multi-scale convolutional attention network-based RUL prediction method; by integrating a multi-scale representation learning strategy, the degradation information of the mechanical system can be extracted at different time scales. Throughout this chapter, experiments on multiple run-to-failure datasets are carried out, which validate the effectiveness of the presented methods.
- Conference Article
- 10.1109/iccct53315.2021.9711849
- Dec 16, 2021
This paper explains the prediction of share market trends of organizations using an Artificial Neural Network (ANN). Long Short-Term Memory (LSTM) incorporated with a simple neural network predicts the movement of a company's stock prices in the share market. LSTM is used for processing time-series data and is a type of Recurrent Neural Network (RNN). In this work, stacked LSTM (layers of LSTM networks) is the core component that processes the huge volume of time-series data. The LSTM model works like a human brain because of its ability to hold both short-term and long-term memory. During data processing in the training stage, the model keeps a short-term memory of the relation between dates and stock prices available in the data. It then keeps track of the relations across successive dates and stock prices since the inception of the company. In this stage, the model tries to find a pattern or trend in the stock price movement, which is kept in long-term memory. As the model processes further data, it finds a more accurate pattern in the stock price movement. An exact date or a number of days is given as input, and the stock price is given as output by the model.
- Research Article
- 10.1007/s11042-024-19919-w
- Jul 30, 2024
- Multimedia Tools and Applications
Deep learning and the Internet of Things (IoT) are revolutionizing the healthcare industry. This study explores the potential commercial transformation resulting from IoT-enabled healthcare systems that use deep learning for patient monitoring and diagnosis. Wearables, smart sensors, and internet-connected medical devices allow doctors to monitor patients' vital signs, activities, and physiological traits in real time. However, these devices generate vast and complex data, making analysis and diagnosis challenging. Deep learning models are well-suited to analyze this growing volume of medical data. Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks can automatically recognize complex patterns and relationships in sensor data, electronic health records, and patient-reported information. This capability aids clinical professionals in diagnosing illnesses, identifying warning signs, and tailoring treatments. This paper describes a Convolutional Neural Network (CNN)- and Long Short-Term Memory (LSTM)-based IoT-enabled healthcare system that performs feature extraction, classification, prediction, and data preparation. Additionally, it addresses interpretability issues, privacy concerns, and resource limitations of deep learning models in real-time healthcare settings. The study demonstrates the effectiveness of CNN- and LSTM-powered IoT-based healthcare solutions, such as real-time patient monitoring, disease detection, risk prediction, and therapy optimization. These techniques can improve the quality, cost, and outcomes of healthcare. Combining CNNs and LSTMs with IoT can significantly enhance healthcare by improving disease detection, personalized treatment, and patient monitoring through connected devices and powerful analytics.
- Research Article
- 10.3390/biology11020169
- Jan 21, 2022
- Biology
Simple Summary: Forecasting dengue cases often faces challenges from (1) time-effectiveness, due to time-consuming satellite data downloading and processing, (2) weak spatial representation, due to data dependence on administrative unit-based statistics or weather station-based observations, and (3) stagnant accuracy without historical dengue cases. With the advance of geospatial big data cloud computing in Google Earth Engine and deep learning, this study proposed an efficient framework for dengue prediction on an epidemiological-week basis using geospatial big data analysis in Google Earth Engine and Long Short-Term Memory modeling. We focused on the dengue epidemics in the Federal District of Brazil during 2007-2019. Based on Google Earth Engine and the epidemiological calendar, we computed the weekly composite for each dengue driving factor and spatially aggregated the pixel values into dengue transmission areas to generate the time series of driving factors. A multi-step-ahead Long Short-Term Memory model was used, with the time-differenced natural log-transformed dengue cases as outcomes and the time series of driving factors as explanatory factors, under two modeling scenarios (with and without historical cases). Performance is better when historical cases are used, and the 5-weeks-ahead forecast has the best performance. Timely and accurate forecasts of dengue cases are of great importance for guiding disease prevention strategies, but still face challenges from (1) time-effectiveness due to time-consuming satellite data downloading and processing, (2) weak spatial representation capability due to data dependence on administrative unit-based statistics or weather station-based observations, and (3) stagnant accuracy without the application of historical case information.
Geospatial big data, cloud computing platforms (e.g., Google Earth Engine, GEE), and emerging deep learning algorithms (e.g., long short term memory, LSTM) provide new opportunities for advancing these efforts. Here, we focused on the dengue epidemics in the urban agglomeration of the Federal District of Brazil (FDB) during 2007–2019. A new framework was proposed using geospatial big data analysis in the Google Earth Engine (GEE) platform and long short term memory (LSTM) modeling for dengue case forecasts over an epidemiological week basis. We first defined a buffer zone around an impervious area as the main area of dengue transmission by considering the impervious area as a human-dominated area and used the maximum distance of the flight range of Aedes aegypti and Aedes albopictus as a buffer distance. Those zones were used as units for further attribution analyses of dengue epidemics by aggregating the pixel values into the zones. The near weekly composite of potential driving factors was generated in GEE using the epidemiological weeks during 2007–2019, from the relevant geospatial data with daily or sub-daily temporal resolution. A multi-step-ahead LSTM model was used, and the time-differenced natural log-transformed dengue cases were used as outcomes. Two modeling scenarios (with and without historical dengue cases) were set to examine the potential of historical information on dengue forecasts. The results indicate that the performance was better when historical dengue cases were used and the 5-weeks-ahead forecast had the best performance, and the peak of a large outbreak in 2019 was accurately forecasted. The proposed framework in this study suggests the potential of the GEE platform, the LSTM algorithm, as well as historical information for dengue risk forecasting, which can easily be extensively applied to other regions or globally for timely and practical dengue forecasts.
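The target the study describes, time-differenced natural log-transformed dengue cases, is straightforward to sketch in numpy, along with the inversion needed to turn predicted differences back into case counts. The `+1` offset below guards against zero-case weeks; it is our assumption, not stated in the abstract:

```python
import numpy as np

def to_diff_log(cases):
    """Transform weekly case counts to first differences of log(cases + 1),
    the target form described for the LSTM (the +1 offset is assumed)."""
    log_cases = np.log(np.asarray(cases, dtype=float) + 1.0)
    return np.diff(log_cases)

def from_diff_log(diffs, last_observed):
    """Invert the transform: accumulate predicted differences on top of
    the last observed count to recover case counts."""
    log_last = np.log(last_observed + 1.0)
    log_future = log_last + np.cumsum(diffs)
    return np.exp(log_future) - 1.0

cases = [10, 22, 45, 90]
d = to_diff_log(cases)                       # 3 log-scale week-over-week changes
recovered = from_diff_log(d, last_observed=10)
print(np.round(recovered))  # [22. 45. 90.]
```

Differencing on the log scale makes the target roughly stationary and proportional (week-over-week growth rates), which is generally easier for an LSTM to learn than raw counts spanning several orders of magnitude.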
- Conference Article
- 10.23919/cinc49843.2019.9005808
- Dec 30, 2019
Developing an objective and efficient computer-aided tool for early detection of sepsis has become a promising research topic. In this paper, we present two methods for early prediction of sepsis from clinical data: one is a neural network-based method and the other is an eXtreme Gradient Boosting (XGBoost)-based method. Considering the temporal relationships within clinical data from sepsis patients in the ICU, we built a Long Short-Term Memory (LSTM) network to extract the intrinsic relations between different indicators in the clinical data and to model the temporal dependencies, using only previous information, not future information, to predict the results. Neural networks have made great achievements on unstructured data, such as images and speech, while traditional machine learning methods are better than neural networks at processing structured data. Thus, we trained an XGBoost model on the pre-processed data to improve prediction accuracy. In the official phase, we used only the first seven vital signs in our network; on test set A, the LSTM-based method has a utility score of 0.267 and the XGBoost-based method a score of 0.392. We submitted the latter method as the final entry, and the official final test utility score is 0.313. Our team name is CQUPT_Just_Try, and our ranking is 15th.
- Research Article
- 10.32996/jefas.2025.7.1.1
- Jan 5, 2025
- Journal of Economics, Finance and Accounting Studies
In this paper, we develop a deep learning-based method for financial market prediction, using the BRICS economies as test cases. Financial markets are rife with volatility driven by a complex web of local and distal factors. To leverage these vast datasets, this study uses deep learning models such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, as well as hybrid architectures. The paper evaluates the predictive accuracy of the models and, in doing so, identifies their strengths in capturing temporal dependencies and intricate market patterns. In particular, deep learning techniques are applied to case studies of individual BRICS countries to highlight their application to disparate country-specific problems, such as liquidity crises and market shocks. These findings show that deep learning systems outperform classical statistical methods in precise and reliable financial forecasting. This research highlights the ability of AI-driven systems to change financial decision-making processes, improving investor confidence and economic stability in BRICS nations. The study also shows readers the value of deep learning in financial market analysis, especially in developing economies. The application of techniques and architectures such as CNNs, which excel at identifying spatial patterns, and LSTM networks, renowned for their prowess on sequential and time-series data, to real-world market prediction is explained. In addition, the research discusses hybrid architectures that fuse the strengths of both to improve prediction accuracy, and how deep learning is evolving to solve particular financial challenges.
Through these notes, readers are exposed to data preprocessing techniques such as normalization and feature selection, which are important for boosting deep learning performance. The paper also introduces model evaluation using MSE and R-squared values to validate that outputs are reliable. This research combines deep learning theory with practical case studies to offer a useful educational resource for students, researchers, and practitioners who want to apply AI to financial forecasting in complex and dynamic global markets.
- Research Article
- 10.1371/journal.pone.0296486
- Apr 17, 2024
- PloS one
Crime remains a crucial concern regarding ensuring a safe and secure environment for the public. Numerous efforts have been made to predict crime, emphasizing the importance of employing deep learning approaches for precise predictions. However, obtaining sufficient crime data and resources for training state-of-the-art deep learning-based crime prediction systems poses a challenge. To address this issue, this study adopts the transfer learning paradigm. Moreover, this study fine-tunes state-of-the-art statistical and deep learning methods, including Simple Moving Averages (SMA), Weighted Moving Averages (WMA), Exponential Moving Averages (EMA), Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM), and Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM) models, for crime prediction. Primarily, this study proposes a BiLSTM-based transfer learning architecture due to its high accuracy in predicting weekly and monthly crime trends. The transfer learning paradigm leverages the fine-tuned BiLSTM model to transfer crime knowledge from one neighbourhood to another. The proposed method is evaluated on Chicago, New York, and Lahore crime datasets. Experimental results demonstrate the superiority of transfer learning with BiLSTM, achieving low error values and reduced execution time. These prediction results can significantly enhance the efficiency of law enforcement agencies in controlling and preventing crime.
- Research Article
- 10.1142/s0129156425409222
- Oct 4, 2025
- International Journal of High Speed Electronics and Systems
- Research Article
- 10.1142/s0129156425403250
- Sep 30, 2025
- International Journal of High Speed Electronics and Systems
- Research Article
- 10.1142/s0129156425408113
- Aug 19, 2025
- International Journal of High Speed Electronics and Systems
- Research Article
- 10.1142/s0129156425408137
- Aug 13, 2025
- International Journal of High Speed Electronics and Systems
- Research Article
- 10.1142/s0129156425408289
- Aug 13, 2025
- International Journal of High Speed Electronics and Systems
- Research Article
- 10.1142/s0129156425408678
- Aug 13, 2025
- International Journal of High Speed Electronics and Systems
- Research Article
- 10.1142/s0129156425408587
- Aug 13, 2025
- International Journal of High Speed Electronics and Systems
- Research Article
- 10.1142/s0129156425408332
- Aug 13, 2025
- International Journal of High Speed Electronics and Systems
- Research Article
- 10.1142/s0129156425408563
- Aug 13, 2025
- International Journal of High Speed Electronics and Systems
- Research Article
- 10.1142/s0129156425408472
- Aug 13, 2025
- International Journal of High Speed Electronics and Systems