Human motion prediction (HMP) aims to forecast future human motions from historical pose sequences. Extensive efforts have adopted Transformers or Graph Neural Networks (GNNs) to capture the spatio-temporal relationships between poses, thereby incorporating contextual information and complex behavioral relationships into motion inference. However, most existing approaches treat HMP as a deterministic problem, resulting in poor diversity and long-tail issues. This study attributes these issues to positional bias within Transformers and to the limited degrees of freedom of the predictive model. Hence, we propose a novel Multi-degree Tail-aware Attention Network (MTAN), comprising a tail-aware attention mechanism and a multi-degree feature representation strategy. Specifically, the tail-aware attention mechanism adeptly captures spatio-temporal dependencies that accommodate both head and tail actions. Built on a Conditional Variational Autoencoder (CVAE), the multi-degree feature representation strategy captures temporal diversity by learning the joint distribution of observed and future sequences. Finally, we leverage Graph Convolutional Networks (GCNs) to model spatial dependencies effectively, culminating in a comprehensive spatio-temporal prediction model. We evaluate the effectiveness of our approach on three benchmark datasets: Human3.6M, AMASS, and 3DPW. The results demonstrate that our approach surpasses state-of-the-art Transformer-based methods, establishing its superiority in HMP.