Abstract

The Radial Basis Function Neural Network (RBFNN) is a class of Artificial Neural Network (ANN) whose hidden-layer processing units (neurons) use nonlinear, radially symmetric activation functions. As a result, RBFNN suffers from significant computational error and difficulty in approximating the optimal hidden neurons, especially when dealing with Boolean Satisfiability logical rules. In this paper, we present a comprehensive investigation of the potential effect of systematic Satisfiability programming as a logical rule, namely 2 Satisfiability (2SAT), in optimizing the output weights and parameters of RBFNN. The 2SAT logical rule has been extensively applied in various disciplines, ranging from industrial automation to complex management systems. The core impetus of this study is to investigate the effectiveness of the 2SAT logical rule in reducing the computational burden of RBFNN by obtaining the RBFNN parameters. A comparison is made between RBFNN and an existing method based on the Hopfield Neural Network (HNN) in searching for the optimal neuron state, utilizing different numbers of neurons; HNN serves as a benchmark to validate the final output of our proposed RBFNN with the 2SAT logical rule. Note that the final output in HNN is represented in terms of the quality of the final states produced at the end of the simulation. The simulation dynamics were carried out using simulated data randomly generated by the program. For the 2SAT logical rule, the simulation revealed that RBFNN has two advantages over the HNN model: RBFNN can obtain the correct final neuron state with the lowest error, and it does not require any approximation for the number of hidden layers. Furthermore, this study provides a new paradigm in the field of feed-forward neural networks by implementing a more systematic propositional logic rule.
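The abstract refers to hidden neurons with nonlinear, radially symmetric activation functions, parameterized by a center and a width. As a minimal sketch of this idea, assuming the common Gaussian basis function (the paper does not specify the exact form here), the activation depends only on the distance between the input and the neuron's center:

```python
import math

def gaussian_rbf(x, center, width):
    """Gaussian radial basis activation: the response depends only on
    the Euclidean distance between the input x and the neuron's center,
    so it is radially symmetric. 'width' controls how fast it decays."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2 * width ** 2))

# The activation peaks at 1.0 when the input coincides with the center
# and decays toward 0 as the input moves away from it.
print(gaussian_rbf([1.0, 2.0], [1.0, 2.0], 0.5))        # -> 1.0
print(gaussian_rbf([2.0, 2.0], [1.0, 2.0], 0.5) < 1.0)  # -> True
```

Because the response is governed entirely by the center and width, these are exactly the parameters whose optimal values the study seeks via the 2SAT logical rule.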

Highlights

  • Artificial neural network (ANN) is a powerful data processing model which has been widely studied and applied by practitioners and researchers due to its capacity and capability in handling and representing complex, non-linear problems

  • Since the popularized version of this perspective emerged from doing logic programming in Hopfield Neural Network (HNN), we will examine the structure of HNN as a benchmark of ANN against our proposed network, which is the Radial Basis Function Neural Network (RBFNN)

  • We have shown the capability of P2SAT in optimizing the final state obtained by the RBFNN

Summary

Introduction

Artificial neural network (ANN) is a powerful data processing model which has been widely studied and applied by practitioners and researchers due to its capacity and capability in handling and representing complex, non-linear problems. Despite the advantages of logic programming in RBFNN, finding optimal parameters such as the center and width using a conventional logical rule has a common limitation in terms of performance error: HornSAT is trapped in a suboptimal solution. In this case, the choice of the logical rule is critical in creating optimal logic programming in RBFNN. To this end, the implementation of the mentioned work only manages to train HornSAT, which exposes RBFNN to a computational burden. A novel feed-forward neural network is therefore proposed by implementing 2SAT as a systematic logical rule in RBFNN to obtain the optimal values of the parameters (center and width). The simulation results indicate a significant improvement in performance evaluation when the 2SAT logical rule is implemented in RBFNN.
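The paper's P2SAT programming itself is not reproduced in this summary, but the 2SAT structure it builds on is simple to state: a formula is a conjunction of clauses, each containing exactly two literals, and an assignment satisfies the formula when every clause contains at least one true literal. A minimal sketch of this satisfiability check (the variable encoding is an illustrative assumption, not the paper's representation):

```python
def eval_2sat(clauses, assignment):
    """Evaluate a 2SAT formula. Each clause is a pair of literals
    (var_index, negated); the formula is satisfied iff every clause
    contains at least one true literal under the given assignment."""
    def lit(var, negated):
        return (not assignment[var]) if negated else assignment[var]
    return all(any(lit(v, neg) for v, neg in clause) for clause in clauses)

# Example formula: (A or B) and (not A or B)
clauses = [[(0, False), (1, False)], [(0, True), (1, False)]]
print(eval_2sat(clauses, [True, True]))   # -> True  (both clauses satisfied)
print(eval_2sat(clauses, [True, False]))  # -> False (second clause fails)
```

The fixed two-literal clause structure is what makes 2SAT systematic compared with HornSAT, which is the property the study exploits when mapping the logical rule onto RBFNN parameters.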

Satisfiability Programming in Artificial Neural Network
Satisfiability Programming in Hopfield Neural Network
Experimental Setup
Findings
Result and Discussion
Conclusions
