This paper is devoted to the convergence and stability analysis of Tikhonov regularization for function approximation by a class of feed-forward neural networks with one hidden layer and a linear output layer. We investigate two frequently used approaches, namely regularization by output smoothing and regularization by weight decay, as well as a combination of the two that aims to retain the advantages of both. We show that in all cases stable approximations are obtained that converge in a desired Sobolev space to the function being approximated as the noise in the data (measured in the weaker $L^2$-norm) tends to zero, provided the regularization parameter and the number of units in the network are chosen appropriately. Under additional smoothness assumptions we derive convergence rates in terms of the noise level and the number of units in the network. In addition, we show how the theoretical results apply to the important classes of one-hidden-layer perceptrons and translation networks. Finally, the performance of the different approaches is compared in numerical examples.
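To make the weight-decay variant concrete, the following minimal sketch (not part of the paper; the test function, network size, and the parameters `alpha` and `delta` are illustrative assumptions) fits a one-hidden-layer tanh network with a linear output layer to noisy data by minimizing the squared data misfit plus a penalty on the squared Euclidean norm of all weights.

```python
# Minimal sketch of Tikhonov regularization by weight decay for a
# one-hidden-layer network; all problem data below are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_units, alpha, delta = 10, 1e-3, 0.05            # units, reg. parameter, noise level

x = np.linspace(0.0, 1.0, 50)
f_true = np.sin(2.0 * np.pi * x)                  # assumed test function
y_delta = f_true + delta * rng.standard_normal(x.size)   # noisy data

def network(params, x):
    """One hidden layer of tanh units followed by a linear output layer."""
    w, b, c = np.split(params, 3)                 # inner weights, biases, outer weights
    return np.tanh(np.outer(x, w) + b) @ c

def tikhonov_objective(params):
    misfit = np.sum((network(params, x) - y_delta) ** 2)  # L^2-type data fidelity
    penalty = alpha * np.sum(params ** 2)                  # weight-decay penalty
    return misfit + penalty

p0 = 0.1 * rng.standard_normal(3 * n_units)
res = minimize(tikhonov_objective, p0, method="L-BFGS-B")
print("regularized training error:", res.fun)
```

In this sketch, decreasing the noise level `delta` while shrinking `alpha` and (if needed) enlarging `n_units` mimics the parameter-choice regime analyzed in the paper; the output-smoothing variant would instead penalize derivatives of the network output rather than the weights themselves.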