Text Classification (TC) is a fundamental task in the information retrieval community. Nowadays, mainstream TC methods are built on deep neural networks, which can learn far more discriminative text features than traditional shallow learning methods. Among existing deep TC methods, those based on Graph Neural Networks (GNNs) have attracted increasing attention due to their superior performance. Technically, GNN-based TC methods mainly transform the full training corpus into a single graph of texts; however, this often neglects the dependencies between words and thus misses semantic information that may be essential for representing texts accurately. To solve this problem, we instead generate graphs of words, so as to capture the dependency information between words. Specifically, each text is translated into a graph of words in which neighboring words are linked. We learn the node features of words through a GNN-like procedure and then aggregate them into a graph-level feature that represents the text. To further improve the text representations, we introduce a contrastive learning regularization term. Specifically, we generate two augmented graphs for each original text graph, and we constrain the representations of the two augmented graphs from the same text to be close and those from different texts to be far apart. We propose various techniques to generate the augmented graphs. Building on these ideas, we develop a novel deep TC model, namely the Text-level Graph Network with Contrastive Learning (TGNcl). We conduct a number of experiments to evaluate the proposed TGNcl model. The empirical results demonstrate that TGNcl outperforms existing state-of-the-art TC models.
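
To make the two central ideas of the abstract concrete, the following is a minimal sketch under assumptions of our own: a sliding window of size 2 links neighboring words, random edge dropping stands in for one of the possible graph augmentations, and an NT-Xent-style loss plays the role of the contrastive regularizer. The function names (build_word_graph, drop_edges, contrastive_loss) and all parameter choices are hypothetical illustrations, not the paper's actual API.

```python
# Hypothetical sketch of text-level word graphs with contrastive regularization.
# Window size, edge-drop rate, and temperature are illustrative assumptions.
import random
import torch
import torch.nn.functional as F


def build_word_graph(tokens, window=2):
    """Translate one text into a graph of words: nodes are unique tokens,
    and tokens within `window` positions of each other are linked."""
    nodes = sorted(set(tokens))
    index = {w: i for i, w in enumerate(nodes)}
    edges = set()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            u, v = index[w], index[tokens[j]]
            if u != v:
                edges.add((min(u, v), max(u, v)))
    return nodes, sorted(edges)


def drop_edges(edges, p=0.2):
    """One possible augmentation: randomly remove a fraction of edges."""
    return [e for e in edges if random.random() > p]


def contrastive_loss(z1, z2, tau=0.5):
    """NT-Xent-style loss: graph representations z1[i] and z2[i] come from
    two augmented views of the same text and are pulled together, while all
    other pairs in the batch are pushed apart."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, d) unit vectors
    sim = z @ z.t() / tau                               # scaled cosine sims
    n = z.size(0)
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    targets = torch.arange(n, device=z.device).roll(n // 2)  # positive index
    return F.cross_entropy(sim, targets)


# Usage: build two augmented views of one text's word graph.
tokens = "graph neural networks for text classification".split()
nodes, edges = build_word_graph(tokens)
view_a, view_b = drop_edges(edges), drop_edges(edges)
```

The graph-level representations z1 and z2 would come from whatever GNN-like encoder and aggregation the model uses; the contrastive term above would then be added to the usual classification loss as a regularizer.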