We develop, in this brief, a new constructive learning algorithm for feedforward neural networks. We employ an incremental training procedure in which training patterns are learned one by one. Our algorithm starts with a single training pattern and a single hidden-layer neuron. When the algorithm gets stuck in a local minimum during training, we attempt to escape from the local minimum by using the weight scaling technique. Only after several consecutive failed attempts to escape from a local minimum do we allow the network to grow by adding a hidden-layer neuron. At this stage, we employ an optimization procedure based on quadratic/linear programming to select initial weights for the newly added neuron. This optimization procedure tends to let the network reach the error tolerance with little or no further training after a hidden-layer neuron is added. Our simulation results indicate that the present constructive algorithm can obtain neural networks very close to minimal structures (i.e., with the least possible number of hidden-layer neurons) and that convergence (to a solution) in neural network training can be guaranteed. We tested our algorithm extensively on a widely used benchmark problem, the parity problem.

Many researchers have studied the neural network training problem, and many algorithms have been reported. Although there have been many successful applications, a number of issues remain unresolved. These include the determination of the number of hidden-layer neurons and the convergence, as well as the speed of convergence, of training. We say that training is convergent if the training algorithm can eventually find a solution (i.e., a trained neural network) to the problem at hand without human intervention. In many cases, this implies that the training algorithm can escape from local minima that it may visit during the course of training. Techniques reported in the literature to deal with the local minimum problem (i.e., the convergence problem) include weight scaling [6], [13] and dynamic tunneling [14].

The number of hidden-layer neurons is one of the most important considerations when solving problems using multilayered feedforward neural networks. An insufficient number of hidden-layer neurons generally results in the network's inability to solve a particular problem, while too many hidden-layer neurons may result in a network with poor generalization performance. The required number of hidden neurons depends on the dimension of the input space and the number of separable regions required to solve a particular classification (mapping) problem [11], [16]. Choosing an insufficient number of hidden neurons leads to an overdetermined problem, since there are not enough adjustable weights to satisfy all of the constraints imposed by the training patterns.
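To make the overall procedure concrete, the following is a minimal sketch in Python of the constructive loop described above. It is illustrative rather than a reproduction of our implementation: for brevity it trains on the full pattern set instead of incrementally, uses a simple multiplicative shrink of the weights as the weight-scaling escape, and initializes a newly added neuron at random rather than by the quadratic/linear programming procedure (a stand-in for which is sketched further below). The function names, learning rate, scaling factor, and escape/growth thresholds are all illustrative choices, not the brief's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    H = sigmoid(X @ W1 + b1)              # hidden-layer activations
    return H, sigmoid(H @ W2 + b2)        # network output

def train_epochs(X, Y, W1, b1, W2, b2, lr=0.5, epochs=2000):
    """Ordinary batch gradient descent on the mean-squared error."""
    for _ in range(epochs):
        H, Yhat = forward(X, W1, b1, W2, b2)
        d_out = (Yhat - Y) * Yhat * (1.0 - Yhat)      # output-layer delta
        d_hid = (d_out @ W2.T) * H * (1.0 - H)        # hidden-layer delta
        W2 -= lr * H.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)
    _, Yhat = forward(X, W1, b1, W2, b2)
    return float(np.mean((Yhat - Y) ** 2))

def constructive_train(X, Y, tol=1e-3, scale=0.7, max_escapes=3,
                       max_hidden=20, seed=0):
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, 1)); b1 = np.zeros(1)  # one hidden neuron
    W2 = rng.normal(0.0, 0.5, (1, 1));    b2 = np.zeros(1)
    failures = 0
    while W1.shape[1] <= max_hidden:      # illustrative cap on network size
        err = train_epochs(X, Y, W1, b1, W2, b2)
        if err < tol:
            return W1, b1, W2, b2                     # solution found
        if failures < max_escapes:
            W1 *= scale; W2 *= scale                  # weight scaling escape: shrink
            failures += 1                             # weights out of saturation
        else:                                         # repeated escapes failed: grow
            W1 = np.hstack([W1, rng.normal(0.0, 0.5, (n_in, 1))])
            b1 = np.append(b1, 0.0)
            W2 = np.vstack([W2, rng.normal(0.0, 0.5, (1, 1))])
            failures = 0
    raise RuntimeError("hidden-layer cap reached in this illustrative sketch")
```

Note that growth is deliberately a last resort in this sketch, mirroring the policy above: a hidden neuron is added only after max_escapes consecutive weight-scaling attempts fail to bring the error below the tolerance.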
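The brief's quadratic/linear programming formulation for initializing the newly added neuron is not reproduced here. As an illustrative stand-in, one can hold the hidden activations fixed and fit the output-layer weights by an unconstrained least-squares problem (the simplest quadratic program), made linear by working in the inverse-sigmoid domain; imposing per-pattern error-tolerance bounds would turn this into a constrained QP/LP solvable with a standard solver. The names below are ours, and the approach is only an assumption about how such an initialization might look.

```python
import numpy as np

def logit(y, eps=1e-6):
    y = np.clip(y, eps, 1.0 - eps)
    return np.log(y / (1.0 - y))      # inverse of the logistic sigmoid

def fit_output_weights(H, Y):
    """Least-squares fit of the output weights and bias with the hidden
    activations H (n_patterns x n_hidden) held fixed -- an unconstrained
    quadratic program with a closed-form solution."""
    A = np.hstack([H, np.ones((H.shape[0], 1))])      # append a bias column
    sol, *_ = np.linalg.lstsq(A, logit(Y), rcond=None)
    return sol[:-1], sol[-1]                          # (weights, bias)
```

Called immediately after a neuron is added, e.g. W2, b2 = fit_output_weights(sigmoid(X @ W1 + b1), Y), a fit of this kind tends to drop the error sharply before gradient training resumes, which is the behavior attributed above to the proposed initialization procedure.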
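The N-bit parity problem used as the benchmark maps each binary input vector to 1 when it contains an odd number of ones and to 0 otherwise; it is a standard stress test because the two classes are not linearly separable. A minimal generator (ours, for illustration) follows.

```python
import itertools
import numpy as np

def parity_dataset(n_bits):
    """All 2**n_bits binary input patterns with their parity targets."""
    X = np.array(list(itertools.product([0, 1], repeat=n_bits)), dtype=float)
    Y = (X.sum(axis=1) % 2).reshape(-1, 1)    # 1 iff an odd number of ones
    return X, Y

# e.g., training the constructive sketch above on 3-bit parity:
# X, Y = parity_dataset(3)
# W1, b1, W2, b2 = constructive_train(X, Y)
```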