Natural Language Generation (NLG) is a critical component of spoken dialogue systems, with a significant impact on both usability and perceived quality. Most NLG approaches in common use employ rules and heuristics, and they tend to generate rigid, stylised responses that lack the natural variation of human language. These limitations also add significantly to development costs and make the delivery of cross-domain, cross-lingual dialogue systems especially complex and expensive. The first contribution of this paper is RNNLG, a Recurrent Neural Network (RNN)-based statistical natural language generator that learns to generate utterances directly from dialogue act–utterance pairs, without any predefined syntax or semantic alignments. The presentation includes a systematic comparison of the principal RNN-based NLG models available. The second contribution is to test the scalability of the proposed system by adapting models from one domain to another. We show that by pairing RNN-based NLG models with a proposed data counterfeiting method and a discriminative objective function, a pre-trained model can be quickly adapted to different domains with only a few in-domain examples. All of the findings presented are supported by both corpus-based and human evaluations.