A good summary should capture the core content of a document; research on automatic text summarization attempts to solve this problem. The encoder-decoder model is widely used in text summarization research, with soft attention supplying the contextual semantic information needed during decoding. However, because soft attention lacks access to key features, the generated summary can deviate from the core content. In this paper, we propose an encoder-decoder model based on a double attention pointer network (DAPT). In DAPT, a self-attention mechanism collects key information from the encoder, soft attention and a pointer network generate more coherent core content, and the fusion of the two produces accurate and coherent summaries. In addition, an improved coverage mechanism is used to address the repetition problem and improve the quality of the generated summaries. Furthermore, scheduled sampling and reinforcement learning (RL) are combined into a new training method to optimize the model. Experiments on the CNN/Daily Mail and LCSTS datasets show that our model performs as well as many state-of-the-art models, and the experimental analysis shows that it achieves higher summarization quality and reduces the occurrence of repetition.
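To make the double-attention decoding step concrete, the sketch below fuses a soft-attention context, a self-attention summary of the encoder states, and a pointer-style copy distribution into a final output distribution. This is a minimal NumPy illustration under stated assumptions: the function names, tensor shapes, concatenation-based fusion, and the mean-pooled self-attention summary are all illustrative choices, not the paper's exact formulation.

```python
# Minimal sketch of one double-attention + pointer decoding step.
# All names, shapes, and the fusion-by-concatenation choice are
# illustrative assumptions, not the paper's exact architecture.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decode_step(enc_states, dec_state, W_soft, W_self,
                vocab_proj, w_gen, src_ids, vocab_size):
    """One decoding step: soft attention + self-attention + pointer mixing.

    enc_states: (T, d) encoder hidden states
    dec_state:  (d,)   current decoder hidden state
    W_soft:     (d, d) soft-attention bilinear weights
    W_self:     (d, d) self-attention bilinear weights
    vocab_proj: (V, 3d) projection to the vocabulary
    w_gen:      (3d,)  generation-switch weights
    src_ids:    (T,)   vocabulary ids of source tokens (for copying)
    """
    # Soft attention: the decoder state attends over encoder states.
    soft_scores = enc_states @ (W_soft @ dec_state)          # (T,)
    soft_attn = softmax(soft_scores)
    soft_ctx = soft_attn @ enc_states                        # (d,)

    # Self-attention over the encoder states distils key features;
    # here the per-position contexts are mean-pooled into one vector.
    self_scores = (enc_states @ W_self) @ enc_states.T       # (T, T)
    self_ctx = (softmax(self_scores) @ enc_states).mean(axis=0)  # (d,)

    # Fuse both contexts with the decoder state (assumed: concatenation).
    fused = np.concatenate([dec_state, soft_ctx, self_ctx])  # (3d,)

    # Generation distribution over the vocabulary.
    p_vocab = softmax(vocab_proj @ fused)                    # (V,)

    # Pointer network: scatter soft-attention weights onto source ids,
    # accumulating weights for tokens that repeat in the source.
    p_copy = np.zeros(vocab_size)
    np.add.at(p_copy, src_ids, soft_attn)

    # Soft switch p_gen mixes generating from the vocabulary and copying.
    p_gen = 1.0 / (1.0 + np.exp(-(w_gen @ fused)))
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```

In this sketch the soft-attention weights serve double duty, as in pointer-generator models: they build the generation context and define the copy distribution, while the self-attention summary injects the encoder-side key features that plain soft attention alone would miss.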