Abstract

A recent article showed that the objective function of the online weight noise injection algorithm is not equal to the training set error of faulty radial basis function (RBF) networks under weight noise (Ho et al., 2010). Hence, the online weight noise injection algorithm cannot minimize the training set error of faulty networks with multiplicative weight noise. This paper proposes an online learning algorithm to tolerate multiplicative weight noise. Two learning rate cases, namely fixed learning rate and adaptive learning rate, are investigated. For the fixed learning rate case, we show that the online algorithm converges if the learning rate $\mu$ is less than $2/(\sigma_b^2 + \max_i \|\boldsymbol{\phi}(\boldsymbol{x}_i)\|^2)$, where the $\boldsymbol{x}_i$'s are the training input vectors, $\sigma_b^2$ is the variance of the multiplicative weight noise, $\boldsymbol{\phi}(\boldsymbol{x}_i) = [\phi_1(\boldsymbol{x}_i), \ldots, \phi_M(\boldsymbol{x}_i)]^{\mathrm{T}}$, and $\phi_j(\cdot)$ is the output of the $j$-th RBF node. In addition, as $\mu \to 0$, the trained weight vector tends to the optimal solution. For the adaptive learning rate case, let the learning rates $\{\mu_k\}$ be a decreasing sequence with $\lim_{k\to\infty} \mu_k = 0$, where $k$ is the index of learning cycles. We prove that if $\sum_{k=1}^{\infty} \mu_k = \infty$ and $\sum_{k=1}^{\infty} \mu_k^2 < \infty$, then the weight vector converges to the optimal solution. Our simulation results show that the proposed algorithm outperforms conventional online approaches such as online weight decay and weight noise injection.
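
To make the two learning-rate conditions concrete, the Python/NumPy sketch below checks the fixed-rate bound $\mu < 2/(\sigma_b^2 + \max_i \|\boldsymbol{\phi}(\boldsymbol{x}_i)\|^2)$ and then runs a decaying schedule $\mu_k = \mu_0 / k$, which satisfies $\sum_k \mu_k = \infty$ and $\sum_k \mu_k^2 < \infty$. The data, the noise variance, and especially the weight update rule are illustrative assumptions: the abstract gives the convergence conditions but not the exact update, so the $\sigma_b^2$-regularized LMS-style step used here is only a plausible guess at a noise-tolerant rule, not the authors' algorithm.

    import numpy as np

    # Illustrative setup: the design matrix Phi (rows play the role of phi(x_i)),
    # the targets y, and the noise variance sigma_b2 are made-up example values.
    rng = np.random.default_rng(0)
    N, M = 50, 8                        # training samples, RBF nodes
    Phi = rng.normal(size=(N, M))
    y = rng.normal(size=N)
    sigma_b2 = 0.01                     # variance of the multiplicative weight noise

    # Fixed-rate case: the bound mu < 2 / (sigma_b2 + max_i ||phi(x_i)||^2).
    mu_max = 2.0 / (sigma_b2 + np.max(np.sum(Phi**2, axis=1)))
    mu0 = 0.5 * mu_max                  # a rate safely inside the bound

    w = np.zeros(M)
    for k in range(1, 5001):
        mu_k = mu0 / k                  # decreasing schedule: sum mu_k diverges, sum mu_k^2 converges
        i = (k - 1) % N                 # cycle through the training set
        e = y[i] - Phi[i] @ w           # prediction error on sample i
        # Noise-aware step: the sigma_b2 * Phi[i]**2 * w term is our guess at the
        # gradient of the regularizer induced by multiplicative weight noise.
        w += mu_k * (e * Phi[i] - sigma_b2 * Phi[i]**2 * w)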
