To optimize the hidden-center matrix, the Gaussian RMS width vector, and the hidden-output weight matrix of the RBF neural network, the Grey Wolf Optimizer (GWO) and several of its variants are introduced. The three parameter sets are combined into the position vector of a grey wolf in GWO, and half of the mean squared error is selected as the objective function; the resulting network is named the RBF-GWO network. During the training of the RBF-GWO parameters, the difference between the actual output of the RBF-GWO network and the desired output guides the update of each wolf's position vector, and in every iteration the optimal parameter values are stored in the position vector of wolf α, which is returned to the RBF-GWO network. Training terminates once the stopping conditions of the iteration are satisfied. To verify the validity of the RBF-GWO network, a continuous function approximation experiment and a chaotic synchronization anti-control experiment are carried out in turn. The results show not only that the proposed RBF-GWO network is effective, but also that, viewed overall, the RBF-GWO network based on the WGWO variant exhibits relatively strong adaptive capacity across all experiments.
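The training loop described above can be sketched as follows. This is a minimal NumPy illustration, assuming the standard GWO position-update rule and a toy 1-D regression target; the network size, data, and all identifiers are hypothetical, and the paper's actual experiments and the WGWO variant are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task (illustrative data, not the paper's experiments)
X = np.linspace(-1, 1, 50).reshape(-1, 1)
y = np.sin(np.pi * X).ravel()

n_hidden, n_in = 6, X.shape[1]
# A wolf's position packs centers, Gaussian widths, and output weights
dim = n_hidden * n_in + n_hidden + n_hidden

def unpack(pos):
    c = pos[:n_hidden * n_in].reshape(n_hidden, n_in)          # hidden centers
    s = np.abs(pos[n_hidden * n_in:n_hidden * n_in + n_hidden]) + 1e-6  # widths
    w = pos[-n_hidden:]                                        # output weights
    return c, s, w

def rbf_output(pos, X):
    c, s, w = unpack(pos)
    d2 = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1)        # squared distances
    phi = np.exp(-d2 / (2.0 * s ** 2))                         # Gaussian activations
    return phi @ w

def fitness(pos):
    # Objective: half of the mean squared error, as in the paper
    err = rbf_output(pos, X) - y
    return 0.5 * np.mean(err ** 2)

n_wolves, n_iter = 30, 200
wolves = rng.uniform(-1, 1, (n_wolves, dim))
best_pos, best_score = None, np.inf

for t in range(n_iter):
    scores = np.array([fitness(w) for w in wolves])
    order = np.argsort(scores)
    alpha, beta, delta = wolves[order[:3]]                     # three best wolves
    if scores[order[0]] < best_score:                          # wolf α keeps the optimum
        best_score, best_pos = scores[order[0]], alpha.copy()
    a = 2.0 * (1 - t / n_iter)                                 # decreases linearly 2 -> 0
    for i in range(n_wolves):
        cand = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - wolves[i])                 # distance to the leader
            cand += leader - A * D
        wolves[i] = cand / 3.0                                 # average of the three pulls

print(f"final objective (0.5 * MSE): {best_score:.4f}")
```

The stopping condition here is simply a fixed iteration budget; the error-driven guidance appears through the fitness ranking, which decides which wolves act as α, β, and δ in each iteration.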