In this brief paper, the Real Time Recurrent Learning (RTRL) algorithm for training fully recurrent neural networks in real time is extended to the case of a recurrent neural network whose inputs, outputs, weights, and activation functions are complex. A practical definition of the complex activation function is adopted, and the complex form of the conventional RTRL algorithm is derived. The performance of the proposed algorithm is demonstrated with an application to complex communication channel equalization.
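The abstract's extension can be illustrated with a minimal sketch of complex-valued RTRL. The sketch below is an assumption-laden illustration, not the paper's derivation: it uses a fully complex `tanh` activation (the paper adopts its own practical complex activation), a sensitivity recursion of the usual RTRL form, and a Wirtinger-calculus steepest-descent update with the conjugated sensitivities. All function and variable names are invented for this example.

```python
import numpy as np

def complex_rtrl(inputs, targets, n_hidden, eta=0.05, seed=0):
    """Online RTRL for a fully recurrent network with complex
    inputs, outputs, weights, and activations (illustrative sketch).

    inputs:  (T, M) complex array; targets: (T, K) complex array with
    K <= n_hidden (the first K neurons serve as outputs).
    """
    rng = np.random.default_rng(seed)
    T, M = inputs.shape
    K = targets.shape[1]
    N = n_hidden
    U = N + M + 1                       # recurrent + external + bias inputs
    W = 0.1 * (rng.standard_normal((N, U)) + 1j * rng.standard_normal((N, U)))
    z = np.zeros(N, complex)            # neuron states
    p = np.zeros((N, N, U), complex)    # sensitivities dz_k / dW_ij
    errs = []
    for t in range(T):
        u = np.concatenate([z, inputs[t], [1.0]])   # concatenated net input
        net = W @ u
        z_new = np.tanh(net)                        # fully complex tanh (assumed)
        fprime = 1.0 - z_new ** 2                   # tanh'(net) for analytic tanh
        # RTRL sensitivity recursion:
        #   p'_kij = f'(net_k) * (sum_l W_kl p_lij + delta_ki * u_j)
        p_new = np.einsum('kl,lij->kij', W[:, :N], p)
        p_new[np.arange(N), np.arange(N), :] += u
        p_new *= fprime[:, None, None]
        e = targets[t] - z_new[:K]
        # Wirtinger steepest descent on |e|^2 uses conjugated sensitivities:
        #   dW_ij = eta * sum_k e_k * conj(p_kij)
        W += eta * np.einsum('k,kij->ij', e, np.conj(p_new[:K]))
        z, p = z_new, p_new
        errs.append(float(np.abs(e).sum()))
    return W, np.array(errs)
```

As a quick check, driving a two-neuron network toward a constant complex target with zero external input makes the instantaneous error shrink over time; a channel-equalization experiment, as in the paper, would instead feed the channel output as `inputs` and the delayed transmitted symbols as `targets`.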