The conjugate gradient method has proven to be an effective strategy for training neural networks due to its low memory requirements and fast convergence. In this paper, we propose an efficient conjugate gradient method for training fully complex-valued network models based on the Wirtinger differential operator. Two techniques are adopted to enhance training performance. The first constructs a sufficient descent direction during training by designing a fine-tuned conjugate coefficient. The second pursues an optimal learning rate at each iteration, determined by a generalized Armijo search, instead of a fixed constant. In addition, we rigorously prove weak and strong convergence results: the gradient norm of the objective function with respect to the weights approaches zero as the iterations increase, and the weight sequence converges to the optimal point. To verify the effectiveness and rationality of the proposed method, four illustrative simulations have been performed on typical regression and classification problems.
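The two ingredients named in the abstract, conjugate directions built from Wirtinger gradients and an Armijo-type line search, can be illustrated on a toy problem. The sketch below is a minimal stand-in, not the paper's method: it uses a simple complex least-squares loss, a Polak-Ribiere-style coefficient clipped at zero in place of the paper's fine-tuned coefficient, and standard backtracking in place of the generalized Armijo search; names such as `wirtinger_grad` and `armijo_step` are illustrative.

```python
import numpy as np

# Toy objective: real-valued least-squares loss over complex weights,
# f(w) = ||A w - b||^2, whose conjugate Wirtinger derivative is
# df/dw-bar = A^H (A w - b).  A and b are synthetic, for illustration only.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
b = rng.standard_normal(8) + 1j * rng.standard_normal(8)

def loss(w):
    r = A @ w - b
    return np.real(r.conj() @ r)

def wirtinger_grad(w):
    # Conjugate Wirtinger derivative of the toy loss (steepest-ascent
    # direction for a real-valued function of complex parameters).
    return A.conj().T @ (A @ w - b)

def armijo_step(w, d, g, sigma=1e-4, shrink=0.5, eta=1.0):
    # Backtracking Armijo rule: shrink eta until
    #   f(w + eta d) <= f(w) + sigma * eta * 2 Re(g^H d),
    # where 2 Re(g^H d) is the directional derivative of a real-valued
    # f of complex w along d.
    f0, slope = loss(w), 2.0 * np.real(g.conj() @ d)
    while loss(w + eta * d) > f0 + sigma * eta * slope:
        eta *= shrink
    return eta

w = np.zeros(4, dtype=complex)
g = wirtinger_grad(w)
d = -g
for k in range(100):
    eta = armijo_step(w, d, g)
    w = w + eta * d
    g_new = wirtinger_grad(w)
    # Polak-Ribiere-style coefficient clipped at zero -- a common
    # safeguard standing in for the paper's fine-tuned coefficient.
    beta = max(0.0, np.real(g_new.conj() @ (g_new - g)) / np.real(g.conj() @ g))
    d = -g_new + beta * d
    if np.real(g_new.conj() @ d) >= 0:  # restart if descent is lost
        d = -g_new
    g = g_new
    if np.linalg.norm(g) < 1e-10:
        break
print(f"stopped at iter {k}, loss {loss(w):.3e}, |grad| {np.linalg.norm(g):.2e}")
```

The restart guard keeps every search direction a descent direction, which is one simple way to realize the sufficient descent property the abstract refers to; the paper instead achieves it through the design of the conjugate coefficient itself.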