In multi-view multi-label learning (MVML), accurate feature weights are pivotal for establishing a reliable feature ordering. However, conventional MVML methods often fail to effectively integrate the distinct information carried by different views, leading to unclear segmentation and the introduction of noise. To address this challenge, this paper proposes an anchor-based latent representation method for learning a global view in MVML. Specifically, we encode the inherent information of each view to derive a candidate multi-view representation. Anchors extracted from the candidate view and the global view are then constrained to be approximately equal in the latent space. Furthermore, a carefully designed view matrix serves as a supplement and is seamlessly integrated into the reconstruction process to augment the available information. The convergence of the optimization is then established under multiplicative update rules. Experimental results demonstrate the superior performance of the proposed method on various multi-view datasets.
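To make the core mechanism concrete, the following is a minimal NumPy sketch of a multiplicative-update scheme for learning per-view latent factors that are pulled toward a shared global representation. It is an illustrative toy under stated assumptions, not the paper's actual algorithm: the data, dimensions, the alignment weight `lam`, and the simple averaged global factor `G` are all hypothetical choices standing in for the anchor-based constraint described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two views of the same n samples (shapes are illustrative).
n, d1, d2, k = 100, 20, 30, 5
X = [np.abs(rng.standard_normal((n, d1))), np.abs(rng.standard_normal((n, d2)))]

# Per-view latent factors W_v with bases B_v, plus a shared global factor G.
W = [np.abs(rng.standard_normal((n, k))) for _ in X]
B = [np.abs(rng.standard_normal((k, Xv.shape[1]))) for Xv in X]
G = np.abs(rng.standard_normal((n, k)))  # global latent representation

lam = 1.0   # weight of the alignment term ||W_v - G||^2 (illustrative value)
eps = 1e-9  # guard against division by zero

err0 = sum(np.linalg.norm(Xv - W[v] @ B[v]) for v, Xv in enumerate(X))

for _ in range(200):
    for v, Xv in enumerate(X):
        # Multiplicative update for B_v of min ||X_v - W_v B_v||^2
        B[v] *= (W[v].T @ Xv) / (W[v].T @ W[v] @ B[v] + eps)
        # Multiplicative update for W_v of
        # min ||X_v - W_v B_v||^2 + lam * ||W_v - G||^2
        # (positive gradient part in the denominator, negative part in the numerator)
        W[v] *= (Xv @ B[v].T + lam * G) / (W[v] @ B[v] @ B[v].T + lam * W[v] + eps)
    # Simple stand-in for the global view: average the aligned per-view factors.
    G = sum(W) / len(W)

recon_err = sum(np.linalg.norm(Xv - W[v] @ B[v]) for v, Xv in enumerate(X))
```

Because every factor starts nonnegative and each update multiplies by a nonnegative ratio, nonnegativity is preserved throughout, which is the usual appeal of multiplicative update rules in this setting.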