Currently, the non-transparent computation process of deep learning has become a major obstacle to its further development. The Neural-Symbolic (NS) system, formed by integrating logic rules into neural networks, has attracted increasing attention owing to its direct interpretability. Embedding symbolic logical formulas into a low-dimensional continuous space provides an effective foundation for NS systems. However, existing studies are constrained in their ability to model the syntactic structure of formulas and fail to preserve the intrinsic semantics in the embeddings, which leads to poor performance on downstream reasoning tasks. To this end, this paper proposes a novel method of Contrastive Graph Representations (ConGR) for logical formula embedding. First, to improve the modeling of syntactic structure, ConGR introduces a densely connected graph convolutional network (GCN) with an attention mechanism to process the syntax parsing graphs of formulas. In this way, discriminative local and global embeddings of formulas are obtained at the syntax level. Second, contrastive instances (positive or negative) for each anchor formula are generated by transformations guided by logical properties. To preserve semantic information, two types of contrast, global-local and global-global, are carried out to refine the formula embeddings. Extensive experiments demonstrate that ConGR outperforms state-of-the-art baselines on entailment checking and premise selection datasets.
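
To make the two mechanisms named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a densely connected GCN block over a formula's syntax graph, an attention readout producing a global formula embedding from local node embeddings, and an InfoNCE-style contrastive loss that could serve for both the global-local and the global-global contrasts. All names, dimensions, and the specific InfoNCE form are illustrative assumptions.

```python
# Illustrative sketch only; class and function names are hypothetical.
import torch
import torch.nn.functional as F


class DenseGCNBlock(torch.nn.Module):
    """GCN layers with dense (concatenative) skip connections."""

    def __init__(self, dim: int, num_layers: int = 3):
        super().__init__()
        # Layer l consumes the concatenation of all earlier outputs.
        self.weights = torch.nn.ModuleList(
            torch.nn.Linear(dim * (l + 1), dim) for l in range(num_layers)
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n_nodes, dim) node features; adj: (n_nodes, n_nodes)
        # normalized adjacency of the formula's syntax parsing graph.
        feats = [x]
        for w in self.weights:
            h = torch.cat(feats, dim=-1)          # dense connectivity
            feats.append(F.relu(adj @ w(h)))      # one GCN propagation step
        return feats[-1]                          # local (node-level) embeddings


def attention_readout(local: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
    """Pool node embeddings into one global formula embedding via attention."""
    scores = F.softmax(local @ query, dim=0)      # (n_nodes,) attention weights
    return (scores.unsqueeze(-1) * local).sum(0)  # (dim,) global embedding


def info_nce(anchor: torch.Tensor, pos: torch.Tensor,
             neg: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: pull the positive toward the anchor, push negatives away."""
    pos_sim = F.cosine_similarity(anchor, pos, dim=-1) / tau            # scalar
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), neg, dim=-1) / tau  # (k,)
    logits = torch.cat([pos_sim.view(1), neg_sim]).unsqueeze(0)         # (1, k+1)
    # The positive sits at index 0 of the logits.
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
```

In the paper's terms, the global-local contrast would pair a formula's global embedding with its own node embeddings as positives, while the global-global contrast would pair it with the global embedding of a formula transformed under logical properties (positive if the transformation preserves semantics, negative otherwise).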