Dual simulated annealing soft decoder for linear block codes

  • Abstract
  • Literature Map
  • Similar Papers
Abstract

This paper proposes a new approach to soft decoding for linear block codes, called the dual simulated annealing soft decoder (DSASD), which applies the simulated annealing algorithm from a previously developed work to the dual code instead of the original code. The DSASD algorithm demonstrates superior decoding performance across a wide range of codes, outperforming classical simulated annealing and several other tested decoders. We conduct a comprehensive evaluation of the proposed algorithm's performance, tuning its parameters to achieve the best possible results. Additionally, we compare its decoding performance and algorithmic complexity with other decoding algorithms in its category. Our results demonstrate a gain of approximately 2.5 dB at a bit error rate (BER) of 6×10⁻⁶ for the LDPC (60,30) code.
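The simulated-annealing search underlying this family of decoders can be sketched as follows. This is a minimal illustration, not the DSASD itself (in particular it works on the original code, not the dual): the energy of a candidate is its squared Euclidean distance to the received soft word, and a neighbor flips one information bit. The toy (7,4) Hamming code, parameters, and function names are all illustrative assumptions.

```python
import math
import random

# Systematic generator matrix of the (7,4) Hamming code (illustrative toy code).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(info):
    """Codeword = info · G over GF(2)."""
    return [sum(i * g for i, g in zip(info, col)) % 2 for col in zip(*G)]

def energy(info, received):
    """Squared Euclidean distance between the BPSK image of the codeword and the received word."""
    signal = [1 - 2 * b for b in encode(info)]   # bit 0 -> +1, bit 1 -> -1
    return sum((s - r) ** 2 for s, r in zip(signal, received))

def sa_decode(received, t0=2.0, alpha=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    # Hard-decision start on the systematic positions.
    info = [0 if received[i] > 0 else 1 for i in range(len(G))]
    e = energy(info, received)
    best, e_best = info[:], e
    t = t0
    for _ in range(steps):
        cand = info[:]
        cand[rng.randrange(len(cand))] ^= 1      # neighbor: flip one information bit
        ec = energy(cand, received)
        # Accept improvements always, worse moves with a temperature-dependent probability.
        if ec < e or rng.random() < math.exp((e - ec) / t):
            info, e = cand, ec
            if e < e_best:
                best, e_best = info[:], e
        t *= alpha                               # geometric cooling
    return encode(best)

# All-zero codeword sent as (+1, ..., +1); one sample pushed toward the wrong sign.
received = [0.9, 1.1, 0.8, 1.0, -0.2, 0.9, 1.1]
print(sa_decode(received))                       # → [0, 0, 0, 0, 0, 0, 0]
```

Tracking the best solution found (rather than returning the final state) is a common SA safeguard against late random walks away from the optimum.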

Similar Papers
  • Research Article
  • Cited by 1
  • 10.3844/jcssp.2018.1174.1189
Simulated Annealing Decoder for Linear Block Codes
  • Aug 1, 2018
  • Journal of Computer Science
  • Lahcen Niharmine + 3 more

In this study, we introduce a novel soft decoder for linear block codes, the first of its kind, based on the simulated annealing (SA) algorithm. The main enhancement in our contribution, which lets our decoder outperform the classical SA approach with a large gain (about 3 dB at 7×10⁻⁴), is to take the most reliable information set of the received codeword as a start solution and to generate neighbor solutions according to this reliability. In addition, performance is enhanced by reducing the search space using the code's error-correcting capability parameter. The performance of the designed algorithm is investigated through a parameter tuning process and then compared with various other decoding algorithms in terms of decoding performance and algorithmic complexity. Simulation results show that our algorithm outperforms its competitor decoders while keeping a minimum computation cost. In fact, our algorithm has a large gain over Chase-2 and GAMD; furthermore, it outperforms the most efficient and up-to-date DDGA decoder by 2 dB at 10⁻⁵ for RS codes.
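The reliability-based start solution described above can be illustrated with a small sketch (hypothetical helper, not the authors' code): sort positions by |LLR|, take the k most reliable as an approximate information set, and hard-decide them. A full implementation must also verify that the chosen columns of the generator matrix are linearly independent, which this sketch omits.

```python
def most_reliable_start(llrs, k):
    """Pick the k positions with largest |LLR| as the (approximate) information
    set and hard-decide them. A complete most-reliable-information-set selection
    must also check that the corresponding generator columns are independent."""
    order = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]), reverse=True)
    info_set = sorted(order[:k])
    # Positive LLR -> bit 0, negative LLR -> bit 1 (BPSK convention assumed here).
    start = [0 if llrs[i] > 0 else 1 for i in info_set]
    return info_set, start

llrs = [2.1, -0.3, 4.0, -1.8, 0.2, -3.5, 1.1]
print(most_reliable_start(llrs, 4))   # → ([0, 2, 3, 5], [0, 0, 1, 1])
```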

  • Conference Article
  • 10.1109/icccnt.2017.8204017
Soft decision multi-stage threshold decoding with sum-product algorithm
  • Jul 1, 2017
  • Shafkat Islam + 2 more

Sum-Product Algorithm (SPA) is usually used in LDPC codes. This paper presents a method to improve the performance of soft decision multi-stage threshold decoding (SMTD) by using SPA. Parity check coding is used for further improvement. This paper also presents a brief comparison among various multi-stage threshold decoding (MTD) techniques. SPA concatenated (ConC) with SMTD and a parity check code is also compared with specific types of repeat accumulate (RA) codes and LDPC codes. The SPA ConC with SMTD outperforms the conventional multi-stage threshold decoding techniques, with a performance gain of 0.3 dB to 0.7 dB at a bit error rate (BER) of 10⁻⁵. Furthermore, the SPA ConC with SMTD and a parity check code gives a 0.2 dB performance gain over the SPA ConC with only SMTD at a BER of 10⁻⁵.

  • Research Article
  • 10.11144/javeriana.iyu21-2.rest
Real-Time Estimation of Some Thermodynamics Properties During a Microwave Heating Process
  • Jun 12, 2017
  • Ingenieria y Universidad
  • Edgar Garcia + 2 more

This work considers the real-time prediction of physicochemical parameters of a sample heated in a uniform electromagnetic field. The thermal conductivity (K) and the combined density and heat capacity term (ρc) were estimated as a demonstrative example. The sample (with known geometry) was subjected to electromagnetic radiation, generating a uniform, time-constant volumetric heat flow within it. A realistic temperature profile was simulated by adding white Gaussian noise to the original data obtained from the theoretical model. For solving the objective function, simulated annealing and genetic algorithms were used, along with the traditional Levenberg-Marquardt method for comparative purposes. Results show similar findings for all algorithms across three simulation scenarios, as long as the signal-to-noise ratio is at least 30 dB. For practical purposes, this means that the estimation procedure presented here requires both a good experimental design and correctly specified electronic instrumentation. If both requirements are satisfied simultaneously, it is possible to estimate these types of parameters online, without the need for an additional experimental setup.

  • Conference Article
  • 10.1109/csnt.2015.161
Design and Implementation of Low Bit Error Rate of LDPC Decoder
  • Apr 1, 2015
  • Ashlesha P Kshirsagar + 3 more

Many classes of high-performance low-density parity-check (LDPC) codes are based on parity-check matrices composed of permutation submatrices. The emulation-simulation framework further allows the algorithm and implementation to be iteratively refined to improve the error-floor performance of the message-passing decoder. A log-likelihood-ratio (LLR) based belief-propagation (BP) algorithm is presented for LDPC codes, and a numerically accurate representation of the check-node update computation used in LLR-BP decoding is described. This paper describes the implementation of the sum-product algorithm (SPA) within an LDPC decoder, where a correction term is used to improve the decoding performance of the min-sum algorithm (MSA). Quantization and the log-tanh function approximation in the sum-product decoder strongly affect which absorbing set dominates in the error-floor region. For the LDPC decoder, the bit error rate (BER) decreases as the signal-to-noise ratio increases.
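The min-sum correction mentioned above can be illustrated by the widely used normalized min-sum check-node update, in which each outgoing message takes the sign product and the scaled minimum magnitude of the other incoming LLRs. The function name and the 0.75 scaling factor are illustrative assumptions, not values from the paper.

```python
def check_node_minsum(llrs_in, scale=0.75):
    """Normalized min-sum check-node update: each outgoing message is the
    product of the signs of the other incoming LLRs times the minimum of
    their magnitudes, damped by a correction (scaling) factor."""
    out = []
    for j in range(len(llrs_in)):
        others = [llrs_in[i] for i in range(len(llrs_in)) if i != j]
        sign = 1.0
        for v in others:                       # sign product of the other edges
            sign = -sign if v < 0 else sign
        out.append(scale * sign * min(abs(v) for v in others))
    return out

print(check_node_minsum([2.0, -1.0, 4.0]))     # → [-0.75, 1.5, -0.75]
```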

  • Research Article
  • Cited by 15
  • 10.14488/bjopm.2020.011
Solving a Periodic Capacitated Vehicle Routing Problem using Simulated Annealing Algorithm for a Manufacturing Company
  • Jan 1, 2020
  • Brazilian Journal of Operations & Production Management
  • Erdal Aydemir + 1 more

Goal: This paper aims to solve a periodic capacitated vehicle routing problem with a simulated annealing algorithm on a real-life industrial distribution problem and to recommend the approach to industry practitioners. The authors aimed to achieve high-performance solutions by coding a manually solved industrial problem, thus solving a real-life vehicle routing problem using the Julia language and the simulated annealing algorithm. Design / Methodology / Approach: The vehicle routing problem (VRP), a widely studied combinatorial optimization and integer programming problem, aims to design optimal tours for a fleet of vehicles serving a given set of customers at different locations. The simulated annealing algorithm is used for the periodic capacitated vehicle routing problem. Julia is a state-of-the-art scientific computing language, and a Julia toolbox developed for logistics optimization is used. Results: The results are compared to the savings algorithm in Matlab in terms of solution quality and time. The simulated annealing algorithm with Julia gives better solution quality in reasonable simulation time than the constructive savings algorithm. Limitations of the investigation: The company data cover 12 periods over a history of four years. For the capacitated vehicle routing problem, a homogeneous fleet with 3000 meters/vehicle is used. The simulated annealing design parameters are chosen by rule of thumb, so better performance could be obtained by optimizing them. Practical implications: In this study, a furniture roving-parts manufacturing company with 30 customers in Denizli, an industrial city in western Turkey, is investigated. Before the scheduling implementation with Julia, the company had no effective and efficient planning, as it had been using spreadsheet programs for vehicle scheduling.
In this study, the solutions obtained with Julia are used in practice for distribution with a higher utilization rate and a minimum number of vehicles. The simulated annealing and savings algorithms are compared in terms of solution time and performance. The savings algorithm produced a better solution time, while the simulated annealing approach achieved the minimum total distance objective value, the minimum number of required vehicles, and the maximum vehicle utilization rate for the whole model. Thus, this paper can contribute to small-scale business management by presenting a digitalization solution for vehicle scheduling. The Julia implementation of simulated annealing for vehicle scheduling can also help both academics and practitioners in organizations, mainly in logistics and distribution problems. Originality / Value: The main contribution of this study is a new solution method for capacitated vehicle routing problems on a real-life industrial problem, using the advantages of the high-level computing language Julia and a meta-heuristic algorithm, the simulated annealing method. Keywords: Capacitated vehicle routing problem, Simulated annealing algorithm, Julia programming language.

  • Conference Article
  • Cited by 7
  • 10.1109/dasip.2011.6136889
A flexible NoC-based LDPC code decoder implementation and bandwidth reduction methods
  • Nov 1, 2011
  • Carlo Condo + 1 more

The need for efficient and flexible LDPC (low-density parity-check) code decoders is rising due to the growing number and variety of standards that adopt this kind of error-correcting code in wireless applications. From the implementation point of view, the decoding of LDPC codes implies intensive computation and communication among hardware components. These processing capabilities are usually obtained by allocating a sufficient number of processing elements (PEs) and proper interconnect structures. In this paper, Network-on-Chip (NoC) concepts are applied to the design of a fully flexible decoder, capable of supporting any LDPC code with no constraints on code structure. It is shown that NoC-based decoders also achieve relevant throughput values, comparable to those obtained by several specialized decoders. Moreover, the paper explores the area and power overhead introduced by the NoC approach. In particular, two methods are proposed to reduce the traffic injected into the network during the decoding process, namely early stopping of iterations and message stopping. These methods are usually adopted to increase throughput; in this paper, by contrast, we leverage iteration and message stopping to cut the area and power overhead of NoC-based decoders. It is shown that, by reducing the traffic injected into the NoC and the number of iterations performed by the decoding algorithm, the decoder can be scaled to lower degrees of parallelism with small losses in terms of BER (bit error rate) performance. VLSI synthesis results on a 130 nm technology show up to 50% area and energy reduction while maintaining an almost constant throughput.

  • Research Article
  • Cited by 6
  • 10.3390/electronics9010122
Iterative Decoding of LDPC-Based Product Codes and FPGA-Based Performance Evaluation
  • Jan 8, 2020
  • Electronics
  • Weigang Chen + 5 more

Low-density parity-check (LDPC) codes have potential applications in future high-throughput optical communications due to their significant error-correction capability and parallel decoding. However, they are not able to satisfy very low bit error rate (BER) requirements due to the error-floor phenomenon. In this paper, we propose a low-complexity iterative decoding scheme for product codes consisting of very high-rate outer codes and LDPC codes. The outer codes aim at eliminating the residual error floor of the LDPC codes at quite low implementation cost. Furthermore, considering the long time needed for computer simulation to evaluate very low BERs, a hardware platform is built to accelerate the evaluation of the proposed iterative decoding methods. Simultaneously, the fixed-point effects of the decoding algorithms are also evaluated. The experimental results show that iterative decoding of the product codes can achieve a quite low bit error rate. The evaluation using a field-programmable gate array (FPGA) also shows that product codes with LDPC codes and high-rate algebraic codes can achieve a good trade-off between complexity and throughput.

  • Conference Article
  • Cited by 5
  • 10.1109/icc45855.2022.9839082
Implicit Partial Product-LDPC Codes Using Free-Ride Coding
  • May 16, 2022
  • Xiao Ma + 3 more

In this paper, we propose a new construction of product codes, where the whole information array is protected row-by-row by a low-density parity-check (LDPC) code while only a portion of the information array is protected column-by-column by an algebraic code. The most distinguished feature of the proposed product code is that, thanks to the free-ride coding technique, the additional column check bits are transmitted implicitly rather than explicitly. The constructed codes are referred to as implicit partial product-LDPC codes, which have the same rates as the row component LDPC codes. The decoding algorithm can be divided into four stages, including decoding of the free-ride codes, first-round decoding of the row codes, decoding of the column codes, and second-round decoding of the row codes by exploiting the messages associated with those successfully decoded columns. To predict the extremely low error rate of the doubly-protected (by both the row code and the column code) information bits, we derive an approximate upper bound. The simulation results show that, with a (3,6)-regular LDPC code of length 1024 as the component code, the proposed product code can lower the word error rate (WER) from 10⁻² down to 10⁻⁶ at an SNR around 2 dB. The numerical results also show that the doubly-protected information bits are more reliable, which can have a bit error rate (BER) down to 10⁻¹⁵ at an SNR around 2.6 dB, as implied by the presented approximate upper bound.

  • Research Article
  • Cited by 1
  • 10.1142/s0218126622503042
Simulated Annealing Algorithm-Aided SC Decoder for Polar Codes
  • Aug 13, 2022
  • Journal of Circuits, Systems and Computers
  • Guiping Li + 2 more

A new decoding scheme aided by the simulated annealing algorithm is proposed to further improve the decoding performance of successive cancellation (SC) for polar codes at short block lengths. We use simulated annealing to revise SC decoding results that cannot pass the CRC check. To generate new neighbors, the decoder flips one bit from the set of the least reliable information bits each time in the estimated source vector of SC decoding. Euclidean distance is used to measure the gap between the new neighbor solution and the received word so that the decoder can obtain a globally optimal solution. Simulation shows that the proposed decoder has a performance gain of about 0.5 dB in terms of frame error rate (FER) at short block lengths in the additive white Gaussian noise (AWGN) channel compared to other basic decoders, while keeping a low time cost through a parameter tuning process.
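The neighbor-generation step this abstract describes (flip one bit among the least reliable information positions, then score candidates by Euclidean distance to the received word) can be sketched as follows; the helper names and toy values are illustrative assumptions, not the paper's implementation.

```python
import random

def euclidean_distance(bits, received):
    """Squared Euclidean distance between the BPSK image of a bit vector and the received word."""
    return sum(((1 - 2 * b) - r) ** 2 for b, r in zip(bits, received))

def flip_neighbor(estimate, reliabilities, n_flip_candidates, rng):
    """Neighbor move: flip one bit chosen among the least reliable
    information positions of the current estimate."""
    order = sorted(range(len(estimate)), key=lambda i: abs(reliabilities[i]))
    cand = estimate[:]
    cand[rng.choice(order[:n_flip_candidates])] ^= 1
    return cand

rng = random.Random(1)
est = [0, 1, 0, 0]
rel = [3.0, 0.1, 2.0, 0.4]        # positions 1 and 3 are the least reliable
print(flip_neighbor(est, rel, 2, rng))
```

The annealing loop then accepts or rejects each neighbor by comparing its Euclidean distance to that of the current solution, exactly as in a standard SA acceptance rule.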

  • Research Article
  • Cited by 6
  • 10.1504/ijor.2013.054436
Simulated annealing and imperialist competitive algorithm for minimising makespan in an open shop
  • Jan 1, 2013
  • International Journal of Operational Research
  • Fariborz Jolai + 2 more

This paper presents an imperialist competitive algorithm (ICA) and a simulated annealing (SA) algorithm for a non-preemptive open shop scheduling problem with job-dependent setup times and sequence-dependent transportation times, to minimise the makespan (total completion time). The parameters of the algorithms are tuned by response surface methodology (RSM). Following the classic approach, the scheduling problems are classified into small, medium, and large sized problems. To evaluate the performance of the proposed ICA, its results are compared with the optimum ones obtained by GAMS for the small sized instances. Moreover, for the medium to large sized instances, the solutions obtained by the ICA are compared with the solutions of the effective SA algorithm and the results are analysed.

  • Book Chapter
  • Cited by 1
  • 10.1007/978-3-540-77224-8_3
Iterative List Decoding of LDPC Codes
  • Dec 16, 2007
  • Tom Høholdt + 1 more

In the last decade two old methods for decoding linear block codes have gained considerable interest: iterative decoding, as first described by Gallager in [1], and list decoding, as introduced by Elias [2]. In particular, iterative decoding of low-density parity-check (LDPC) codes has been an important subject of research, see e.g. [3] and the references therein. "Good" LDPC codes are often randomly generated by computer, but recently codes with an algebraic or geometric structure have also been considered, e.g. [3] and [4]. The performance of the iterative decoder is typically studied by simulations, and a theoretical analysis is more difficult. In this paper we combine the two decoding methods and present an iterative list decoding algorithm. In particular, we apply this decoder to a class of LDPC codes from finite geometries and show that the (73, 45, 10) projective geometry code can be maximum likelihood decoded with low complexity. Moreover, the list decoding approach enables us to give a complete analysis of the performance in this case. We also discuss the performance of the list bit-flipping algorithm for longer LDPC codes. We consider hard-decision iterative decoding of a binary (n, k, d) code. For a received vector y, we calculate an extended syndrome s = Hy′, where H is a parity-check matrix that usually has more than n − k rows. Let r denote the length of the syndrome. The idea of using extended syndromes was also used in [5]. Our approach is based on one of the common versions of bit flipping (BF) [3], where the schedule is such that the syndrome is updated after each flip. In each step we flip a symbol chosen among those positions that reduce the weight of the extended syndrome, which we refer to briefly as the syndrome weight, u. A decoded word is reached when u = 0. In this paper we consider a variation of the common algorithm in the form of a tree-structured search.
Whenever there is a choice between several bits, all possibilities are tried in succession. The result of the decoding algorithm is, in general, a list of codewords, obtained as the leaves of the search tree. This form of the bit-flipping algorithm leads naturally to a solution in the form of a list of codewords at the same smallest distance from y [6]. This list decoding concept is somewhat different from list decoding in the usual sense of all codewords within a certain distance from y. The paper is a continuation of [7], including results on long codes from [8].
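The syndrome-weight-reducing flip at the heart of this algorithm can be sketched as follows. An ordinary parity-check matrix is used here rather than an extended one, and the function names and toy (7,4) Hamming example are illustrative assumptions.

```python
def syndrome_weight(H, y):
    """Weight u of the syndrome H·y over GF(2)."""
    return sum(sum(h * b for h, b in zip(row, y)) % 2 for row in H)

def bit_flip_step(H, y):
    """One step of the bit-flipping schedule: flip the first position that
    reduces the syndrome weight; return y unchanged if no flip helps.
    A decoded word is reached when the syndrome weight u hits 0."""
    u = syndrome_weight(H, y)
    for i in range(len(y)):
        cand = y[:]
        cand[i] ^= 1
        if syndrome_weight(H, cand) < u:
            return cand
    return y

# Parity-check matrix of the (7,4) Hamming code and a word with one bit error.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]
y = [0, 0, 0, 0, 0, 0, 1]          # single error in the last position
y = bit_flip_step(H, y)
print(y, syndrome_weight(H, y))    # → [0, 0, 0, 0, 0, 0, 0] 0
```

The tree-structured variant in the abstract branches instead of taking the first improving flip: whenever several positions reduce u, each is explored in turn, yielding a list of candidate codewords at the leaves.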

  • Research Article
  • 10.7840/kics.2013.38a.6.471
UEP Performance Analysis of LDPC Codes for High-Quality Communication Systems
  • Jun 30, 2013
  • The Journal of Korea Information and Communications Society
  • Seog Kun Yu + 1 more

Powerful error control and an increase in the number of bits per symbol are required for future high-quality communication systems. In multimedia data, message bits may differ in importance, so unequal error protection (UEP) may be more efficient than equal error protection (EEP) in such cases. The LDPC (low-density parity-check) code shows error-correcting performance near the Shannon limit. Therefore, the effect of UEP with LDPC codes is analyzed for high-quality message data in this paper. The relationship among the mean square error (MSE), bit error rate (BER), and the number of bits per symbol is analyzed theoretically and verified by simulation. For this purpose, the message bits in a symbol are divided into two groups according to importance, and the UEP performance is obtained by simulation while varying the number of message bits in each group under the constraint of a fixed overall code rate and codeword length. From these results, the UEP performance of LDPC codes is analyzed according to the number of bits per symbol, the ratio of message bits in each group, and the protection level of each group.

  • Research Article
  • Cited by 33
  • 10.1103/physreva.100.012330
Modified belief propagation decoders for quantum low-density parity-check codes
  • Jul 19, 2019
  • Physical Review A
  • Alex Rigby + 2 more

Quantum low-density parity-check codes can be decoded using a syndrome-based GF(4) belief propagation decoder. However, the performance of this decoder is limited both by unavoidable 4-cycles in the code's factor graph and by the degenerate nature of quantum errors. For the subclass of CSS codes, the number of 4-cycles can be reduced by breaking an error into an X and a Z component and decoding each with an individual GF(2) based decoder. However, this comes at the expense of ignoring potential correlations between these two error components. We present a number of modified belief propagation decoders that address these issues. We propose a GF(2) based decoder for CSS codes that reintroduces error correlations by reattempting decoding with adjusted error probabilities. We also propose the use of an augmented decoder, which has previously been suggested for classical binary low-density parity-check codes. This decoder iteratively reattempts decoding on factor graphs that have a subset of their check nodes duplicated. The augmented decoder can be based on a GF(4) decoder for any code, a GF(2) decoder for a CSS code, or even a supernode decoder for a dual-containing CSS code. For CSS codes, we further propose a GF(2) based decoder that combines the augmented decoder with error probability adjustment. We demonstrate the performance of these new decoders on a range of different codes, showing that they perform favorably compared to other decoders presented in the literature.

  • Research Article
  • 10.12688/f1000research.73581.1
Performance Analysis of Simulated Annealing and Genetic Algorithm on systems of linear equations
  • Dec 20, 2021
  • F1000Research
  • Md Shabiul Islam + 4 more

Problem solving and modelling with traditional substitution methods is time consuming for large-scale systems of simultaneous equations. For such large-scale global-optimization problems, the simulated annealing (SA) algorithm and the genetic algorithm (GA), meta-heuristic random search techniques, perform faster. Therefore, this study applies SA to solve systems of linear equations and evaluates its performance against GAs, population-based search meta-heuristics widely used in the travelling salesman problem (TSP), noise reduction, and more. This paper presents a comparison between the performance of SA and GA for solving real-time scientific problems. The significance of this paper is in solving certain real-time systems with sets of simultaneous linear equations containing different numbers of unknown variables, simulated in Matlab using the two algorithms. In all of the experiments, randomly generated initial solution sets and random populations of solution sets were used in SA and GA, respectively. The performance of SA and GA was evaluated on the optimization of these systems. Based on the experiments on sets of simultaneous equations, the SA algorithm is superior to GA, with a lower fitness function evaluation count in the MATLAB simulation. Since complex non-linear systems of equations have not been the primary focus of this research, the performance of SA and GA on such equations will be addressed in future work. Even though GA maintained a relatively lower average number of generations than SA, SA still outperformed GA with a reasonably lower fitness function evaluation count. Although SA sometimes converges slowly, it is still efficient for solving problems of simultaneous equations in this case. In terms of computational complexity, SA was far superior to GAs.
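As an illustration of the setup described above (not the authors' Matlab implementation), a simulated-annealing search can minimize the squared residual of a linear system. The toy 2×2 system, function names, and SA parameters are all assumptions for the sketch.

```python
import math
import random

def residual(A, b, x):
    """Fitness used here: sum of squared residuals of A·x = b."""
    return sum((sum(a * v for a, v in zip(row, x)) - bi) ** 2 for row, bi in zip(A, b))

def sa_solve(A, b, t0=1.0, alpha=0.99, steps=2000, seed=0):
    rng = random.Random(seed)
    x = [0.0] * len(A[0])                   # random/zero initial solution
    e = residual(A, b, x)
    best, e_best = x[:], e
    t = t0
    for _ in range(steps):
        cand = x[:]
        cand[rng.randrange(len(cand))] += rng.gauss(0, 0.5)   # perturb one variable
        ec = residual(A, b, cand)
        if ec < e or rng.random() < math.exp((e - ec) / t):
            x, e = cand, ec
            if e < e_best:
                best, e_best = x[:], e
        t *= alpha                          # geometric cooling
    return best

# 2x2 system with exact solution x = (1, 2).
A = [[2.0, 1.0], [1.0, 3.0]]
b = [4.0, 7.0]
x = sa_solve(A, b)
print(x, residual(A, b, x))
```

A GA would instead evolve a population of candidate x vectors under the same residual fitness, which is where the fitness-evaluation-count comparison in the abstract comes from.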

  • Research Article
  • Cited by 136
  • 10.1016/s0305-0548(97)00054-3
A systematic procedure for setting parameters in simulated annealing algorithms
  • Mar 1, 1998
  • Computers &amp; Operations Research
  • Moon-Won Park + 1 more


More from: IAES International Journal of Artificial Intelligence (IJ-AI)
  • Research Article
  • 10.11591/ijai.v14.i5.pp4271-4278
AI-driven hyper-personalization and transfer learning for precision recruitment
  • Oct 1, 2025
  • IAES International Journal of Artificial Intelligence (IJ-AI)
  • Nour Alqudah + 2 more

  • Research Article
  • 10.11591/ijai.v14.i5.pp4363-4370
The effectiveness of ChatGPT in extracting architectural patterns and tactics
  • Oct 1, 2025
  • IAES International Journal of Artificial Intelligence (IJ-AI)
  • Hind Milhem + 2 more

  • Research Article
  • 10.11591/ijai.v14.i5.pp4113-4122
Educational data mining approach for predicting student performance and behavior using deep learning techniques
  • Oct 1, 2025
  • IAES International Journal of Artificial Intelligence (IJ-AI)
  • Muniappan Ramaraj + 5 more

  • Research Article
  • 10.11591/ijai.v14.i5.pp3835-3846
Early goat disease detection using temperature models: k-nearest neighbor, decision tree, naive Bayes, and random forest
  • Oct 1, 2025
  • IAES International Journal of Artificial Intelligence (IJ-AI)
  • Fareza Ananda Putra + 1 more

  • Research Article
  • 10.11591/ijai.v14.i5.pp4211-4225
Exploring social media sentiment patterns for improved cyberbullying detection
  • Oct 1, 2025
  • IAES International Journal of Artificial Intelligence (IJ-AI)
  • Wael M S Yafooz + 5 more

  • Research Article
  • 10.11591/ijai.v14.i5.pp3681-3692
Optimizing diabetes prediction: unveiling patient subgroups through clustering
  • Oct 1, 2025
  • IAES International Journal of Artificial Intelligence (IJ-AI)
  • Rita Ganguly + 2 more

  • Research Article
  • 10.11591/ijai.v14.i5.pp4250-4259
Intent detection in AI chatbots: a comprehensive review of techniques and the role of external knowledge
  • Oct 1, 2025
  • IAES International Journal of Artificial Intelligence (IJ-AI)
  • Jemimah Kandaraj + 2 more

  • Research Article
  • 10.11591/ijai.v14.i5.pp3982-3993
A hybrid model for handling the imbalanced multiclass classification problem
  • Oct 1, 2025
  • IAES International Journal of Artificial Intelligence (IJ-AI)
  • Esra'A Alshdaifat + 4 more

  • Journal Issue
  • 10.11591/ijai.v14.i5
  • Oct 1, 2025
  • IAES International Journal of Artificial Intelligence (IJ-AI)

  • Research Article
  • 10.11591/ijai.v14.i5.pp4353-4362
Transformation of Islamic values in the era of artificial intelligence
  • Oct 1, 2025
  • IAES International Journal of Artificial Intelligence (IJ-AI)
  • Nur Faizin + 5 more
