Abstract

The competitive associative net CAN2 is a neural net that learns a piecewise-linear approximation of a nonlinear function incrementally, by means of competitive and associative learning. The effectiveness of this net has already been demonstrated in applications to function approximation, control, precipitation estimation, and other problems. Its learning algorithm essentially consists of competitive learning based on the gradient method, and therefore suffers from the problem of local optima. The purpose of this paper is to circumvent this problem. First, the asymptotic optimality condition, that is, the condition for minimizing the mean-square error of the approximation function, is derived for the case in which the net consists of a very large number of units. This condition can be used to decide whether the current weight assignment is close to the optimal solution, and also to suggest weight assignments closer to it. Incorporating the condition into a gradient-based learning algorithm yields the following scheme: whenever the weight assignment obtained by the gradient method is judged not to be close to the optimal solution, the weights of some units are reinitialized so that the assignment moves closer to the optimum. Finally, numerical experiments are performed in which the proposed learning algorithm is applied to several benchmark functions, and its effectiveness is verified. The results are compared with those of BPN (back-propagation net), RBFN (radial basis function net), and SVR (support vector regression), and it is shown that CAN2 with the proposed algorithm has excellent function approximation performance. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(9): 85–96, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.10538
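To make the idea of piecewise-linear approximation by competitive units concrete, the following is a minimal sketch (not the paper's exact CAN2 formulation): each unit holds a competitive weight vector (a center) and an associative matrix giving a local linear model; the unit whose center is nearest the input wins the competition, its linear model produces the output, and both its center and its model are updated by gradient (LMS) rules. The class name, learning rates, and update rules here are illustrative assumptions.

```python
import numpy as np

class CAN2Sketch:
    """Illustrative piecewise-linear approximator with competitive units."""

    def __init__(self, n_units, dim, seed=0):
        rng = np.random.default_rng(seed)
        # competitive weight vectors (centers) of the units
        self.w = rng.uniform(0.0, 1.0, size=(n_units, dim))
        # associative matrices: local linear models y ~ M @ [x; 1]
        self.M = np.zeros((n_units, dim + 1))

    def _winner(self, x):
        # competitive step: the unit whose center is nearest to x wins
        return int(np.argmin(np.sum((self.w - x) ** 2, axis=1)))

    def predict(self, x):
        c = self._winner(x)
        return float(self.M[c] @ np.append(x, 1.0))

    def train(self, X, y, epochs=50, eta_w=0.05, eta_m=0.2):
        for _ in range(epochs):
            for x, t in zip(X, y):
                c = self._winner(x)
                xb = np.append(x, 1.0)
                # gradient (LMS) update of the winner's local linear model
                err = t - self.M[c] @ xb
                self.M[c] += eta_m * err * xb
                # move the winning center toward the input (competitive learning)
                self.w[c] += eta_w * (x - self.w[c])

# usage: approximate the piecewise-linear target y = |x - 0.5|
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(400, 1))
y = np.abs(X[:, 0] - 0.5)
net = CAN2Sketch(n_units=8, dim=1)
net.train(X, y)
```

A purely gradient-based scheme like this can leave some units stuck with poorly placed centers, which is the local-optimum problem the paper addresses by reinitializing the weights of such units when the asymptotic optimality condition indicates the assignment is far from optimal.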