Multi-person pose estimation is an important field in computer vision. Owing to their lower time complexity, bottom-up approaches have recently received more attention in multi-person 2D pose estimation; however, they are more sensitive to the challenges of real-world scenarios. In this paper, we propose a multi-person pose estimation algorithm based on the Double Anchor Embedding (DAE), which shows that bottom-up algorithms remain competitive in precision. First, to reduce the modeling difficulty of the detection task, we divide the human joints into upper- and lower-half groups that are internally contiguous and highly correlated. Accordingly, we design a novel joint affinity cue, called the Double Anchor Embedding, which helps the network effectively extract both local and global context and thus cope better with occluded scenes and complex postures. Second, we propose a parallel greedy joint inference algorithm that alleviates the mismatching of distant joints in the post-processing stage and also accelerates the matching process to some extent. Extensive experiments on two challenging datasets demonstrate the effectiveness and potential of the proposed framework, which is comparable to current state-of-the-art methods.
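The abstract does not specify the DAE or the parallel greedy inference in detail. As a rough, illustrative sketch of the generic embedding-based greedy grouping idea that bottom-up methods build on (not the paper's actual algorithm), each detected joint carries a scalar embedding tag, and joints are greedily assigned to the person whose anchor tag is closest within a threshold; all names, thresholds, and values below are hypothetical.

```python
# Hypothetical sketch of embedding-tag-based greedy joint grouping.
# Not the paper's DAE algorithm: names, threshold, and data are illustrative.

def greedy_group(detections, tag_threshold=0.5):
    """detections: list indexed by joint type; each entry is a list of
    (x, y, score, tag) tuples for that joint type."""
    people = []  # each person: {"anchor_tag": float, "joints": dict}
    for joint_type, joints in enumerate(detections):
        # Process the most confident detections of this joint type first.
        for x, y, score, tag in sorted(joints, key=lambda j: -j[2]):
            best, best_dist = None, tag_threshold
            for person in people:
                if joint_type in person["joints"]:
                    continue  # at most one joint of each type per person
                dist = abs(tag - person["anchor_tag"])
                if dist < best_dist:
                    best, best_dist = person, dist
            if best is None:
                # No existing person is close enough: start a new one.
                best = {"anchor_tag": tag, "joints": {}}
                people.append(best)
            best["joints"][joint_type] = (x, y, score)
    return people

# Toy scene: two people with well-separated tags (~0.1 vs ~0.9),
# two joint types detected for each.
dets = [
    [(10, 10, 0.9, 0.10), (50, 12, 0.8, 0.92)],  # joint type 0
    [(11, 30, 0.7, 0.12), (51, 33, 0.6, 0.90)],  # joint type 1
]
groups = greedy_group(dets)
print(len(groups))  # 2
```

In real bottom-up pipelines the grouping is driven by learned embeddings and confidence maps predicted by the network; this sketch only shows why well-separated tags make the greedy assignment unambiguous.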