In multi-view multi-label (MVML) learning, features are distributed across multiple views, each offering a distinct semantic representation. While existing approaches aim to balance commonality and complementarity within the view space, inconsistency in the label space remains underexplored, exposing the inadequacy of assuming uniform labels across views. There is therefore a pressing need to model the relationships among view-specific labels. Our method diverges from previous approaches by imposing constraints tailored to learning view-specific labels: we preserve common characteristics through inter-view relationships while retaining traits specific to each view's instances. These strategies establish a clear mapping between labels and feature representations, enabling precise feature weighting. Convergence to the optimal feature set is achieved through multiplicative update rules. Comprehensive experimental analysis shows that our method outperforms state-of-the-art alternatives in most cases.
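The abstract does not spell out the multiplicative update rules; as background, optimizers of this family (popularized by non-negative matrix factorization) iterate elementwise multiplicative corrections that preserve non-negativity and do not increase the objective. The sketch below is an illustrative NMF example only, not the paper's actual algorithm; the function name and toy data are my own.

```python
import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-10, seed=0):
    """Factorize a non-negative matrix X ~ W @ H via multiplicative updates.

    Each step multiplies a factor elementwise by a ratio of non-negative
    terms, so entries stay non-negative and the Frobenius reconstruction
    error is non-increasing (Lee & Seung-style updates).
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H

# Toy usage: reconstruction error shrinks as iterations accumulate.
X = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf_multiplicative(X, rank=5)
err = np.linalg.norm(X - W @ H)
```

Because the correction factor is a ratio of the negative and positive parts of the gradient, no step size needs tuning, which is one reason such rules are popular for feature-weighting objectives like the one described above.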