Graph contrastive learning has emerged as a powerful technique for mitigating graph noise and mining latent information in networks, and it has been widely applied in GNN-based collaborative filtering. Traditional graph contrastive learning methods commonly generate multiple augmented views and then learn node representations by maximizing the consistency between these views. However, on one hand, manual view construction requires expert knowledge and a trial-and-error process; on the other hand, adaptive view construction requires additional decoders, which increases training cost. To address these limitations, we propose the Adaptive Denoising Graph Contrastive Learning with Memory Graph Attention for Recommendation (ADGA) framework. First, we introduce a memory graph attention mechanism to capture node attention during multi-hop information aggregation. Then, unlike previous methods that require additional node representations to generate views, ADGA is, to the best of our knowledge, the first to use attention directly to adaptively generate structure-aware contrastive learning views. This reduces the training cost of the model and improves the cross-view consistency of node representations, offering a new paradigm for adaptive graph contrastive learning. Experimental results on three real-world datasets demonstrate that ADGA achieves state-of-the-art performance in recommendation tasks. The code is available at https://github.com/Andrewsama/ADGA.
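To make the idea of attention-driven view generation concrete, the sketch below shows one generic way such a scheme could work: edges are kept for a contrastive view with probability proportional to their attention scores, so structurally important edges survive in both views while noisy edges are more likely to be dropped. This is a minimal illustration assuming a precomputed attention vector; the function name `attention_view`, the toy graph, and the scores are hypothetical and not taken from the ADGA paper.

```python
import numpy as np

def attention_view(edges, att, keep_ratio=0.8, rng=None):
    """Sample a structure-aware view of the graph: keep a fixed fraction
    of edges, drawing without replacement with probability proportional
    to each edge's attention score (hypothetical sketch)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    p = att / att.sum()                      # normalize scores to a distribution
    n_keep = max(1, int(keep_ratio * len(edges)))
    idx = rng.choice(len(edges), size=n_keep, replace=False, p=p)
    return [edges[i] for i in sorted(idx)]

# Toy user-item interaction edges with assumed learned attention scores
edges = [(0, 10), (0, 11), (1, 10), (1, 12), (2, 11)]
att = np.array([0.9, 0.1, 0.8, 0.7, 0.2])

# Two stochastic views for contrastive learning
view_a = attention_view(edges, att, rng=np.random.default_rng(1))
view_b = attention_view(edges, att, rng=np.random.default_rng(2))
```

Because both views are biased toward high-attention edges, the representations learned from them tend to agree on the graph's reliable structure, which is the intuition behind improved cross-view consistency.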