The computational cost of data privacy protection tends to rise with dimensionality, especially on correlated datasets. A faster data protection mechanism is therefore needed to handle high-dimensional data while balancing utility and privacy. This study introduces a framework that improves performance by leveraging distributed computing strategies. The framework integrates dedicated feature selection algorithms with distributed mutual information computation, which is crucial for sensitivity assessment. It is further optimized through hyperparameter tuning based on Bayesian optimization, which minimizes either a combined score of the Bayesian information criterion (BIC) and Akaike's information criterion (AIC) or the maximal information coefficient (MIC) score alone. Extensive testing was conducted on 12 datasets with tens to thousands of features, covering both classification and regression tasks. With our method, the sensitivity of the resulting data is lower than with alternative approaches, so less perturbation is required for an equivalent level of privacy. Using a novel Privacy Deviation Coefficient (PDC) metric, we assess the performance disparity between original and perturbed data. Overall, the framework reduces execution time by 64.30%, providing valuable insights for practical applications.
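
The combined BIC/AIC objective mentioned above can be illustrated with a minimal sketch. The exact weighting used in the study is not stated here, so the sketch below assumes an equal-weight average of the two criteria, and the candidate hyperparameter settings are purely illustrative:

```python
import math

def aic(log_likelihood, k):
    # Akaike's information criterion: 2k - 2*ln(L)
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    # Bayesian information criterion: k*ln(n) - 2*ln(L)
    return k * math.log(n) - 2 * log_likelihood

def combined_score(log_likelihood, k, n):
    # Hypothetical equal-weight combination of AIC and BIC;
    # the actual weighting in the framework may differ.
    return 0.5 * (aic(log_likelihood, k) + bic(log_likelihood, k, n))

# Illustrative candidates: each is a model fit with k parameters
# and the resulting log-likelihood on n samples.
candidates = [
    {"k": 2, "log_likelihood": -120.0},
    {"k": 5, "log_likelihood": -100.0},
    {"k": 9, "log_likelihood": -98.0},
]
n = 200

# A Bayesian optimizer would propose candidates adaptively; here we
# simply select the configuration minimizing the combined score.
best = min(candidates, key=lambda c: combined_score(c["log_likelihood"], c["k"], n))
```

In practice, a Bayesian optimization library would propose new hyperparameter settings based on past scores rather than evaluating a fixed list, but the objective being minimized has this form.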