Minority oversampling is currently one of the most popular and effective methods for handling imbalanced data. However, oversampling that relies solely on minority-class observations to generate new samples is not applicable when the minority samples are extremely scarce, because the strongly underrepresented minority class does not contain enough information to support the oversampling process. Although some recent studies have demonstrated the effectiveness of using majority-class information to guide oversampling, neglecting class overlap during sampling can increase the degree of overlap and complicate the decision boundary. To this end, this paper proposes a Mahalanobis distance and Local information based OverSampling (MLOS) method for highly imbalanced class-overlapped data. MLOS first employs the majority-class density to guide sample synthesis, using the Mahalanobis distance to extract the probability contours of the majority class. Then, for each minority seed sample, to avoid generating overlapping samples, MLOS constrains the synthesis process by finding an auxiliary sample (among its 5 nearest neighbors) whose probability density value is similar to that of the seed. Finally, MLOS applies a pair-wise data cleaning process to sharpen the decision boundary according to the probability density of the synthetic samples. Comparative experiments conducted on 16 highly imbalanced class-overlapped datasets against 17 competing methods demonstrate the superiority of the proposed method in terms of three popular evaluation metrics for imbalanced classification: AUC, G-mean, and Recall. The source code of MLOS is available at https://github.com/ytyancp/MLOS.
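To make the sampling idea above concrete, the following Python sketch illustrates a Mahalanobis-guided, locally constrained oversampler in the spirit of MLOS. It is only an approximation under stated assumptions: the majority density is modeled as a Gaussian via the Mahalanobis distance, the auxiliary sample is taken from each seed's minority-class neighbors, new points are linear interpolations between seed and auxiliary, and the pair-wise data cleaning step is omitted. The function name `mlos_like_oversample` and its parameters are hypothetical; the authors' implementation is in the linked repository.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mahalanobis_density(X, mean, inv_cov):
    """Squared Mahalanobis distance to the majority-class center; smaller
    values correspond to higher majority probability density."""
    diff = X - mean
    return np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

def mlos_like_oversample(X_maj, X_min, n_new, k=5, seed=None):
    """Illustrative sketch (not the official MLOS implementation):
    for each randomly chosen minority seed, pick the neighbor among its
    k nearest minority neighbors whose majority-density value is closest
    to the seed's, then interpolate between seed and that auxiliary sample."""
    rng = np.random.default_rng(seed)

    # Majority-class Gaussian statistics defining the Mahalanobis contours.
    mean = X_maj.mean(axis=0)
    cov = np.cov(X_maj, rowvar=False) + 1e-6 * np.eye(X_maj.shape[1])
    inv_cov = np.linalg.inv(cov)

    d_min = mahalanobis_density(X_min, mean, inv_cov)

    # k nearest minority neighbors of each minority sample (excluding itself);
    # assumes the minority class has at least k + 1 samples.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)

    synthetic = []
    for _ in range(n_new):
        s = rng.integers(len(X_min))      # random minority seed
        neigh = idx[s, 1:]                # its k nearest minority neighbors
        # Auxiliary sample: the neighbor with the most similar majority
        # density, so interpolation stays near the seed's probability contour.
        aux = neigh[np.argmin(np.abs(d_min[neigh] - d_min[s]))]
        lam = rng.random()
        synthetic.append(X_min[s] + lam * (X_min[aux] - X_min[s]))
    return np.vstack(synthetic)
```

Constraining the auxiliary sample to lie on a similar majority-density contour is what keeps interpolated points from drifting into the overlap region, which is the intuition the abstract describes.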