In multi-label learning, addressing class imbalance is of paramount importance. Oversampling methods are preferred because they offer a general, model-independent solution: they alleviate dataset imbalance by augmenting instances in the pre-processing step. Existing neighbor-based oversampling methods employ an empirically fixed number of neighbors (k=5) to identify the local region in which new instances are created. However, a single fixed k cannot fit all labels, because each label usually has its own distinct distribution and complexity. Furthermore, the label assignment for synthetic instances typically depends on the statistics of individual labels within the corresponding neighborhood, ignoring the informative correlations among labels. To overcome these limitations, we propose an oversampling method called Multi-Label Oversampling with Natural neighbor and label Correlation (MLONC). Our approach offers three main advantages: (1) an adaptive number of neighbors for each label, related to the data complexity, is obtained via natural neighbor detection; (2) it encourages generating more instances close to the decision boundary of highly imbalanced labels and diminishes the impact of outliers; (3) exploiting label correlations in label assignment enhances the quality of the synthetic instances. Experimental results demonstrate the effectiveness of MLONC under various base classifiers.
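As background for advantage (1), the following is a minimal sketch of the standard natural neighbor (NaN) search procedure, which yields an adaptive neighborhood size without a user-supplied k: the search radius r grows until every point is listed as a neighbor by some other point (or the count of such "orphan" points stops shrinking), and each point's natural neighbors are its mutual neighbors at that radius. All names are illustrative; this is not the authors' implementation of MLONC.

```python
import math

def natural_neighbor_search(points, max_r=None):
    """Return (lam, nan_sets): the natural eigenvalue lam (the adaptive
    neighborhood size) and each point's set of mutual (natural) neighbors.
    Illustrative sketch of the generic NaN-search idea, not MLONC itself."""
    n = len(points)
    if max_r is None:
        max_r = n - 1
    # Precompute each point's neighbor indices sorted by Euclidean distance.
    order = []
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        order.append([j for _, j in dists])
    knn = [set() for _ in range(n)]      # i's r nearest neighbors so far
    reverse = [set() for _ in range(n)]  # points that list i as a neighbor
    prev_orphans = None
    for r in range(1, max_r + 1):
        for i in range(n):
            j = order[i][r - 1]          # i's r-th nearest neighbor
            knn[i].add(j)
            reverse[j].add(i)
        orphans = sum(1 for i in range(n) if not reverse[i])
        # Stop when every point has a reverse neighbor, or the number of
        # points without one no longer shrinks.
        if orphans == 0 or orphans == prev_orphans:
            lam = r
            break
        prev_orphans = orphans
    else:
        lam = max_r
    # Natural neighbors of i: points that are neighbors of i AND list i back.
    nan_sets = [knn[i] & reverse[i] for i in range(n)]
    return lam, nan_sets
```

A method in the spirit of MLONC could run such a search per label to obtain a label-specific neighborhood size instead of a global k=5, since sparse or complex labels will stabilize at larger r than dense, simple ones.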